Using Vim as a SQL Server Client

Bad times - having to connect to my company's SQL Server database in order to migrate data. The last time I looked at that database it was through a connection to a Windows box running the Microsoft development client. Laggy, slow, and an unfamiliar environment.

So what are my options on a Mac? Well, there are a couple of SQL Server clients out there, but the one I tried turned out to be buggy - there's nothing less fun than losing a query you've been crafting for an hour. And why should I spend time moving the query between an editor (Vim) and an awkward GUI interface?

Screw that - let's do it in Vim! This method has been tested on my Mac, but should also work on other *nix-based systems. No guarantees though.

The Tools

To do this we need:

  • a way for the Mac to talk to the SQL Server database
  • a way for that connection to talk to Vim

And we will be using the following tools to do that:

  • dbext - a Vim plugin that runs SQL from a buffer against a database
  • unixodbc - an ODBC driver manager
  • freetds - a library that speaks TDS, the SQL Server protocol
  • Perl's DBI and DBD::ODBC modules - the glue that lets dbext talk to ODBC

We'll start with a simple example - connecting dbext to a local instance of PostgreSQL.

dbext with PostgreSQL

Install dbext into Vim using whichever package manager (or other method) you like. Dbext is a 'proper' Vim extension, in that it comes with extensive documentation in the form of Vim help files. Access the full help with :h dbext and check out the tutorial at :h dbext-tutorial.

If even the tutorial isn't fast enough for you, here's a quick start for a local PostgreSQL instance. Let's assume you've already installed PostgreSQL, and that you've not added any usernames or passwords. Somewhere in your Vim initialization files (like ~/.vimrc) add the following line:

let g:dbext_default_profile_local_PSQL = 'type=PGSQL'

Here we've set up a local profile for PostgreSQL, called it local_PSQL, and told dbext that the type of connection this profile will use is, well, Postgres. Now either restart Vim or evaluate that command in your current session.
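Profiles can hold more than just the type. If your local setup needs a specific user or database, a slightly fuller profile looks like this - the user and dbname values here are hypothetical, so swap in your own:

```vim
" user and dbname are placeholders - substitute your own values
let g:dbext_default_profile_local_PSQL = 'type=PGSQL:user=postgres:dbname=mydb'
```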

To use this profile with some SQL in Vim, first put some SQL into a buffer. Try this: SELECT * FROM bob. Then, in the buffer, send the command :DBPromptForBufferParameters. You'll get a menu with the local_PSQL option on it. Select that option (probably option 1).

Now have a crack at running a query - put your cursor on the line with the SQL statement on it and execute the query with the command <Leader>sel (sql execute line).

This won't work, but it should give you a hint about what's going wrong: PostgreSQL needs a .pgpass file to look up the password for the database - even if there isn't a password(!). So just create an empty file called .pgpass in your $HOME directory. Now it should work (or at least return a reasonable error message if your default database doesn't have a bob table).
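In shell terms that's just the following - the chmod is worth doing too, as PostgreSQL will ignore a .pgpass file that is group- or world-readable:

```shell
# Create an empty password file for PostgreSQL and lock down its
# permissions - libpq skips the file if others can read it.
touch ~/.pgpass
chmod 600 ~/.pgpass
```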

Now... go crazy! Dbext is great - read some of the docs in full and have a play with some SQL queries. Ah, the joy of Vim! Come back when you want to hook it up to a remote SQL Server.

SQL Server on Mac

Had fun? Right, this bit is a bit of a drag. Macs (and other *nix systems) have no native support for TDS, the protocol by which we talk to SQL Server databases. So we'll have to install a library called freetds.

But before we do that we'll need to install an ODBC driver manager, as the binaries that come with freetds aren't sophisticated enough for dbext to query a database through directly.1

First up we'll install unixodbc - I'm using Homebrew, but if you'd like to build your own binaries be my guest.

brew install unixodbc

And now that's done, let's get hold of freetds:

brew install freetds --with-unixodbc --with-msdblib

We're asking for freetds to be installed with unixodbc support, and telling it that we want it to use the version of TDS that Microsoft developed for SQL Server.2

If you'd like to test the installation, freetds comes with a couple of command line tools you can use to connect to a SQL Server DB. Run tsql with the appropriate server, port, username and password passed in as options and check to see if you can connect. Have a bit more of a play around if you can - you've earned it.
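For example - every value below is a placeholder, so substitute your own server details:

```shell
# Placeholder connection details - swap in your real server, port,
# user and password.
tsql -H db.example.com -p 1433 -U myuser -P 'mypassword'

# tsql -C prints freetds's compile-time settings, including the
# default TDS protocol version - useful as a sanity check.
tsql -C
```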

Perl: DBI and DBD::ODBC

The final step is to get dbext talking to the ODBC installation. And it wants to do this through a pair of Perl libraries. We'll install these using CPAN, the Perl equivalent of RubyGems or NPM.3 First start the interactive interface as the super user:

sudo perl -MCPAN -e shell

And then install the libraries we need:

install DBI
install DBD::ODBC

Both of these will output a scarily verbose amount of logs - it's ok, it's normal4 - and by the end of it we've got everything installed that we'll need. Almost there. Almost...
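To check they both landed, you can ask each module to print its version - a missing module will die with a "Can't locate ..." error instead:

```shell
# Print the installed versions of the two Perl modules.
perl -MDBI -e 'print "DBI $DBI::VERSION\n"'
perl -MDBD::ODBC -e 'print "DBD::ODBC $DBD::ODBC::VERSION\n"'
```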

Configuration files

Getting freetds and unixodbc working together happily is super simple - but it took me a while to work out exactly what was needed. Configuration for freetds can live either in its own configuration file, or with the ODBC configuration. The simplest thing to do is to push the configuration over to the ODBC side entirely.

What we're looking to do is to tell ODBC that there is a sort of database called 'FreeTDS' and to point it to where the files that describe the protocol live - this is the database 'driver', just like a printer driver. Then we need to give ODBC the details of the specific database we want to connect to - think of this as the specific printer you connect to using that printer driver, its network address and so on.

The first step is to register freetds as a driver with unixodbc - this is done in a file called odbcinst.ini which Homebrew has (hopefully) symlinked into /usr/local/etc/odbcinst.ini5. And in that file we put the following:

[FreeTDS]
Description = TD Driver (MSSQL)
Driver = /usr/local/lib/libtdsodbc.so
Setup = /usr/local/lib/libtdsodbc.so
FileUsage = 1

The top line is the name we're giving the driver, the second a human-friendly description of what it does. The next two lines give ODBC the driver and setup libraries - libtdsodbc.so was installed with the freetds installation and put in /usr/local/lib as a symlink by Homebrew (again, hopefully).

That's the driver bit done. Now let's point ODBC to your SQL Server database by adding its details to the ~/.odbc.ini file, which you'll have to create.6 Put the following in there:

[MyMSSQLDB]
Driver = FreeTDS
Server = <ip or domain name goes here>
Database = <database name goes here>
Port = <port number>

This connection information is called a DSN7, and we'll be using it in dbext. Replace MyMSSQLDB with something more descriptive - it's the name of the connection to your database that ODBC (and, by extension, dbext) will use.
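unixodbc also ships with a little interactive client, isql, which is a handy way to check the DSN end to end before involving Vim at all (the username and password here are placeholders):

```shell
# -v makes isql print the full driver error if the connection fails.
# MyMSSQLDB is the DSN from ~/.odbc.ini; the credentials are placeholders.
isql -v MyMSSQLDB myuser mypassword
```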

Success! One more small step to go.

ODBC in dbext

Now we've got an ODBC connection to play with, it's time to put its details into dbext. This can be done by putting the following into your .vimrc - right next to where you declared your PostgreSQL connection information.

let g:dbext_default_profile_MyMSSQLDB = 'type=ODBC:user=<username>:passwd=<password>:dsnname=MyMSSQLDB'8

Pretty long, right? But comprehensible. We're giving similar information to that which we used for the PostgreSQL connection, only we're declaring that the type is ODBC, and declaring the DSN name we're using as well.

And that's it. Restart your Vim session, <Leader>sbp (it's the same as :DBPromptForBufferParameters) and pick MyMSSQLDB (feel free to give it a better name later). You can now evaluate lines of SQL against the database, and see the results in a separate split below.


To recap, the whole process:

  • brew install unixodbc
  • brew install freetds --with-unixodbc --with-msdblib
  • sudo perl -MCPAN -e shell
  • install DBI and install DBD::ODBC in the CPAN shell
  • Add the following to /usr/local/etc/odbcinst.ini:
[FreeTDS]
Description = TD Driver (MSSQL)
Driver = /usr/local/lib/libtdsodbc.so
Setup = /usr/local/lib/libtdsodbc.so
FileUsage = 1
  • Add the following to ~/.odbc.ini:
[MyMSSQLDB]
Driver = FreeTDS
Server = <ip or domain name goes here>
Database = <database name goes here>
Port = <port number>
  • install dbext into Vim
  • Add let g:dbext_default_profile_MyMSSQLDB = 'type=ODBC:user=<username>:passwd=<password>:dsnname=MyMSSQLDB' to ~/.vimrc
  • Read the dbext manual (:h dbext)

  1. My details are fuzzy at best, but as far as I can see the osql and tsql bins that come with freetds are not set up for interactive querying, and can't be used in the same way that, say, osql on a Windows machine would work. 

  2. Even when making their own standard, M$ can't help but diverge from it. 

  3. Or Maven or whatever. 

  4. CPAN is running all the tests on each of the modules. Bit excessive I know. 

  5. This information can also be added using the odbcinst tool, but this way seems easier to me. Read more about these files in the unixODBC documentation. 

  6. ODBC will also look in /usr/local/etc/odbc.ini for DSNs, but these will be available to all users. So we're putting them in the local user file it checks, ~/.odbc.ini

  7. Data Source Name - just so you know. 

  8. The connection information used here can include the database, but we've pushed that part down to the DSN defined above. It must always include the username and passwd from what I've seen through experimentation. 

Backing up your Homebrew packages

Update: while this is a good template to do a quick backup, a more flexible solution exists in the Brew Bundle project. Thanks @MacHomebrew for the pointer!

It's a good idea to keep track of what packages you've got installed in Homebrew - good for provisioning a new Mac, good for recovering from a disaster.

Getting a list of the current packages is as simple as

brew ls

but that gives us everything, dependencies and all. If we just want what we explicitly installed, we should go for

brew leaves

like the leaves of our dependency tree.

Just pipe that out into a file

brew leaves > homebrew-packages.txt

for safekeeping and get it under version control along with the rest of your configuration files.

When it comes to recovery, we can save time and effort by using xargs to pass each line of the file as an argument to brew install

cat homebrew-packages.txt | xargs brew install

and everything will (re)install in one go. It may take some time.

Maybe you want to add to the list from one machine without overwriting the current list? I just did (for one reason or another), and it's fairly easy to handle. Instead of overwriting the text file, append to the end of it

brew leaves >> homebrew-packages.txt

Now you might have some repetitions in that file - get rid of them with

sort homebrew-packages.txt | uniq

This sorts the original list into order, then removes any lines that are repetitions of the one before, leaving only one. Pipe that out to a new file

sort homebrew-packages.txt | uniq > homebrew-packages-reconciled.txt

and overwrite the old one if you need to (just don't redirect back into the original file in the same pipe - the shell truncates the file before sort gets to read it, and you end up with a blank file. Boo.)
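As it happens, sort can do the whole dedupe in place safely: -u folds in the uniq step, and -o is explicitly allowed to name the input file as the output, because sort reads all of its input before it writes anything. A quick demonstration with a made-up package list:

```shell
# Build a list with a duplicate in it, then sort and dedupe it in place.
printf 'vim\ngit\nvim\n' > homebrew-packages.txt
sort -u homebrew-packages.txt -o homebrew-packages.txt
cat homebrew-packages.txt
# → git
# → vim
```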

Did I mention that Text Processing with Ruby was an amazing book? I worked out how to do the above from what I read in the section on Unix tools. Nice.

Text Processing with Ruby by Rob Miller

Plain text is at the root of everything we can do as developers - we read it, we manipulate it, we write it back out as logs or files or HTML. We write it as code to do all that, and at the end of the day we use it to write blog posts like this. Rob Miller's Text Processing with Ruby shows you how to do it as quickly, and elegantly, as possible.

A standout feature is the book's use of command line tools. While there are obviously ways to count the number of lines in a file in Ruby, none are going to be as quick and as easy as cat file.txt | wc -l.1 The book breaks down and blurs the edges between shell tools, Ruby scripts and larger, more organised code in a really beautiful way.

We're shown tools like cat, grep and cut, and how to interoperate them with Ruby code written straight into the command line. And as this builds up we're then shown how to handle standard input and output inside Ruby programs, allowing us to create our own command line tools that will play nicely as a part of a pipeline with the rest of the Unix tool chain.

For me this stuff alone is worth reading the book for. Leveraging forty years' worth of text tools alongside your Ruby code gets more done, faster. The part of the book that demonstrates how to pipe out of a process into less to generate paged output was one of the most amazing things I've seen done in Ruby to date.

Regular expressions are covered in greater depth than I have seen in other Ruby books, and with a strong emphasis on their real-world application. By the time I'd finished these sections I felt like a real regex ninja.

The book also shows how to use Ruby's system variables (the ones starting with $) to keep regex and other code short. Some people (OK, my friend Andrea) dislike using anything starting with a $ in their code, and I can see their point as it can look a little esoteric and obscure. But it felt right to me in the context of this book; maybe you'd not want to use the system variables in larger, more modular software, but they're perfect for the short, command line scripts often used to process text.

We're also given a fun tour of parts of Ruby I've not seen - ERB templating (outside of a web framework), SimpleDelegator - and a few deep dives on popular text parsing and processing libraries such as the ever-present Nokogiri and StringScanner. Natural language processing and fuzzy matching using Phrasie and Text are shown off too.

I've found Text Processing with Ruby a great exploration of working with text both inside and outside of Ruby. Reading it has extended my knowledge of Ruby significantly, while giving me a wealth of new techniques and tools to use to manipulate text.

And that may be the biggest selling point of this book: I can apply it right away - I am literally using the things I've learned at work today. Perfect for the beginner to intermediate Rubyist, or any programmer who wants some standout techniques for handling text whatever language they're using.

  1. The author gave an interesting example of the power available in command line tools when I saw him speak at Brighton Ruby, comparing the processing speed of Hadoop against a laptop's UNIX tool chain using small (~2GB) data sets. You can read more about this comparison on Adam Drake's blog

Clojure Dojo

I've just got back from Clojure Dojo over at the Thoughtworks offices in Soho. The Clojure Dojos have been run by the London Clojurians for about five years now and are still going strong. I've been meaning to go to one for a while, and tonight the stars were right.

What's a Clojure Dojo? Developers gather in a room full of pizza1 with computers and chat about which editors they use.2 Then someone very sensibly says we should probably start.

First we introduce ourselves, tell everyone how much we know about Clojure and what the first language we learned was.3 There was a good mix of people in tonight: a few who had been doing Clojure for years, many for months, and a few like me who have been dipping their toes in the water for a while. I was heartened to see a Maker there who had graduated on Friday - that takes some guts and enthusiasm.

Then we pitch ideas to hack on for the evening. They were all great ideas, from Conway's Game of Life to Google Maps plugins that show extinct species. The winner was a Twitter Markov generator - an awesome idea that generates new tweets based upon a history of tweets, picking each next word based upon the likelihood that it follows the previous word in the history.

We were split into random groups of 3-4 and - just got on with it. After about two hours the teams gathered together once more to show off their (completely unfinished) products. Then we all went home.

The subject sounded fairly dry to me, but it turned out to be plenty to get my teeth stuck into - I'd almost recommend it as a kata. I was working with three other devs. There was Fabio, who'd been working in Clojure for about a year and had the most wicked Emacs/Cider4 setup, which was a joy to watch him code with. There was also Francis, who had been looking at Clojure 'for about two weeks' but had a Lisp background that just shone through. And happily we also had Chris, the convener of the Dojo, who had been working in Clojure for four years or so. He helped a lot, and was great to chat with about the practices, concepts and style of Clojure.

I learned a lot in a short space of time, refreshing my brief exposure to the language quite quickly.5 But the best bit was when we planned the data structure. Initially we were looking at nested hash-map arrangement, with the inner map indicating the percentage chance of each next word - something like:

{ "Word" { "nextWord1" 1, "nextWord2" 10, "nextWord3" 79} }

But then Francis suggested that we just hold a list of a hundred words that represented the probability distribution:

{ "Word" ["nextWord1"
          "nextWord2" ;; ten times
          "nextWord3"] } ;; eighty odd times

To which I responded that we may as well just collect up all of the next words and randomly sample from it. I thought this would make everyone go 'oh, but that'd be too long' but, no, everyone liked it. So massive, massive vectors it was.

My next favourite moment was when I wondered whether we'd be slowly feeding the tweets in one by one to a function to build the hash-map. No, Fabio (I think) said - we'll just combine them into one giant string and feed it in.

One. Giant string. Of all the tweets. Mind blown. Not quite big data, but I loved the epic processing powers it felt like we were harnessing - lightning of the gods and all that good stuff.

I discovered that all my practice in functional languages over the last few months means I can put together a recursive function pretty quickly and accurately. Unfortunately I also discovered I can't come up with a decent name when typing - and I found out that there's little appetite for a refactor in a Dojo. That function's name is my eternal shame.6

We had something fairly decent by the end of the Dojo. More exciting was seeing what the other teams had produced. One team had made a glorious and terse-looking almost-one-liner (and it really could have done with being put on a few more lines) that seemed to have more functionality than I could believe. Another team practiced their TDD7 and also created an elegant 'word shifting' move that provided the word-nextWord pairs without any recursion. And the final team wrote no code per se, but spent a productive few hours setting up a Clojure project and investigating the useful libraries that could help in the project.

I feel I can now see what all the fuss about Clojure is about. Each team had thought about the nature of the data they were going to create and work on, linking it to the functions they were using to translate the input into the output. The beauty came in the diverse implementations, both in data and functions, and in just how terse yet expressive both were. It's left me wanting more.

A friendly atmosphere, enthusiastic programmers, a great language and problem to play with - and did I mention the pizza? I'm already looking forward to the next one.

Many, many thanks to Chris Ford for organising and Thoughtworks for hosting!

  1. This is a widely used pattern that has proven highly performant in all languages... 😉 

  2. Developer smalltalk equivalent of 'how did you travel here?' 

  3. One ML and one Turbo Pascal. Nice. 

  4. I've also started to use Emacs - see the next footnote. 

  5. I've been doing a lot of Lisp recently, which I'll get to blogging about soon enough. 

  6. reduce-sum - seriously? It wasn't even summing anything 😢🐼 

  7. I'm not even going to get into whether we should have been TDDing or whether the repl can cover some of those requirements. I'll call it a massive spike for now and move on to TDDing it when I do it again. 

How to code (almost) everywhere

'Always be coding' was a piece of advice I was given early on when I was learning to be a developer, but it struck me today how incredibly easy it can be. The trick is to treat your computer like a newspaper or a novel. And the way to achieve this is to reduce the barriers (both mental and physical) that prevent you from writing.

With the requirements for a development environment being just a laptop - internet connection optional - you can start to hack on a problem anywhere you have access to a lap (preferably your own). That's anywhere you can sit down.

The only other physical limiting factor is how fast you can take out and put away your laptop. Reducing the friction involved in starting (and stopping) coding not only increases the time you have to dedicate to it, it opens up new places you can code, and allows you to form interesting new habits. Essentially you will become able to code in quick, short bursts when that becomes possible.

I use a simple soft case by Plemo which has a pair of integrated carry handles. I usually stash the case and Mac in my backpack, but the handles help me carry it on its own for quick access if I know I'm going to be stopping and starting a lot - say when changing trains on a commute. The Mac wakes up from sleep in about 2-3 seconds.

This all means that I can dip in and out of coding with the ease of reading a newspaper - when travelling, when waiting for someone, when grabbing lunch - it's easy to pick up and put away in a matter of seconds. You can now code in shorter bursts - think of them as 'microsessions'.

A side benefit of this is that you will spend (even) more time thinking about code when away from the keyboard. The gaps between each microsession will be filled with thoughts about what you just did and what you will do next. It encourages an increased reflection on your practice as a result of the natural pauses and interruptions that occur in life, rather than the artificial ones generated by techniques like Pomodoro.

Try taking your laptop everywhere for a week and see what the shortest viable time to do something useful on it is. I've managed to get something worthwhile done between two stops on the tube.

(And if you can't sit down you can always do some Clojure on your phone).