SMTP Troubles


For some reasons I don’t entirely understand, and some that I do, I’ve been having SMTP server issues for a while now.

The one I don’t entirely understand involves my hosting service, which provides SMTP services but for some reason began blocking my server from connecting and sending email. I spent a couple of customer service sessions on the phone with them and wasn’t able to get any satisfying resolution. Apparently my IP was blacklisted by them, and I can’t appeal the listing myself; the owner of the IPs has to. The problem is, I can’t get in touch with the owner of the IPs because the ownership records don’t make any sense to me.

So I started using a different SMTP server (I have several email accounts and thus access to several SMTP servers). Unfortunately, the second one is not well administered and its SSL certificates recently expired, so I couldn’t connect to it.

I looked into ditching the ISP-based SMTP server relationship entirely and just setting up my mail server downstairs to send mail directly. The only problem is I don’t have a dedicated IP address. Rather, I have a DHCP-issued one from my ISP and, while it can be stable for months at a time, it will change. My SMTP server would thus have a variable IP address, which would cause problems connecting to other email servers for mail delivery. Because of bulk-mailers and spammers, one of the first things email providers check for is an SMTP server with a dynamic IP address, and they quickly block them.

I didn’t want to use Google or Yahoo because, well, I don’t particularly trust them. So that left me with one other option: finding a dedicated SMTP server service. It turns out there are a number of them available, each with their own take on services and pricing. They are all geared for bulk and marketing-type emailing, but most have very low-cost plans that could suit a home need.

The big advantage for a home would be that, no matter where a person went, they could always connect to this SMTP service for email. I’ve actually already sort of done this- I just needed someone to help me send my emails reliably. After looking at services from the likes of mailjet, authsmtp, turbosmtp and several others, I opted to go with Easy-SMTP.

I went with their free service since it provides for 10,000 emails per month for an account and it plays nicely with my MTA, exim. It will also play nicely with just about any modern mail client. The other services either attached advertisements to emails or didn’t offer as much. Also, this one was clear about allowing multiple user email accounts to access the same account, which was a big deal since everyone’s mail would be using this server. I’m not saying the others don’t do that, it just wasn’t clear that they did. I was almost willing to pay for one of the services (less than $20 for a year for sending thousands of emails), but in the end Easy-SMTP just seemed the best value.
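For anyone wanting to wire up something similar, the relevant exim pieces are a smarthost router, an authenticated SMTP transport, and a client authenticator. This is only a rough sketch from memory; the host name, port, and credentials are placeholders, and the real settings have to come from the relay provider’s documentation:

```
# routers section: send everything non-local to the relay (host is a placeholder)
smarthost:
  driver = manualroute
  domains = ! +local_domains
  transport = remote_smtp_smarthost
  route_list = * smtp.relay.example.com

# transports section: insist on TLS and authentication when talking to the relay
remote_smtp_smarthost:
  driver = smtp
  port = 587
  hosts_require_auth = *
  hosts_require_tls = *

# authenticators section: client-side LOGIN with the relay account credentials
relay_login:
  driver = plaintext
  public_name = LOGIN
  client_send = : myaccount@example.com : mypassword
```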

The signup was extremely simple. I wish I could say I was up and running with it quickly, but I wasn’t. In this instance, I couldn’t blame Easy-SMTP though. I had some latent configuration issues with my mail server setup which prevented it from working straight out of the box. I finally figured that issue out and my mail is once again working. Hopefully, Easy-SMTP continues to as well.

This is Amazing


This is so indescribable that I had to link it here:

Alright, it’s not indescribable. It’s just a dot matrix printer playing Eye of the Tiger by Survivor. Yes, you read that right. It’s a dot matrix printer playing Eye of the Tiger by Survivor.

Having watched it a couple times, I wonder if it isn’t a gimmick of some sort. It sounds so musical I’m having a hard time believing a dot matrix printer could make those sounds. You don’t even have to squint to hear it.

On the other hand, if it’s legit, then that’s a virtuoso programming performance.

(hattip: Outside the Beltway)

All Hail the Smartphone!


I have a very simple website I’ve set up for use with my Cub Scout Pack. It’s for the parents, and it lets me post things like news items and, most importantly, sign-up forms for various Pack activities. It’s a huge help this time of year because each Fall we have a big popcorn fundraiser.

One of the big activities we do to help boost sales for the boys and the Pack generally is hold “Show-n-Sells.” Various businesses allow Scouts to hang out in front of their store all day and try to sell popcorn to their customers. The generosity of people towards the boys never ceases to amaze me.

The main piece of any Show-n-Sell, aside from the popcorn to sell, is a couple of Scouts each hour to work their magic, as well as a parent to make sure they do work their magic. I’ve found that an online sign-up sheet is the most effective way to get participation. I create a simple page listing the hours and positions available, along with a simple form so a parent can choose the slot they want to fill. By the time the day arrives, all the slots are filled and we’re good to go.

With our final Show-n-Sell coming up next week on Election Day, I made the necessary modifications to the online sign-up page this morning and then sent out an email to let everyone know it was good to go and they could begin signing up. I, confident that all was well with the world, headed out to my martial arts class.

A couple hours later, after the class, I was cooling down and on a whim decided to check my mail with my phone.

PANIC! I had 6 emails from various parents telling me they could not sign up, that they’d tried but nothing was working.

So first, I told them it was clearly a heavy traffic issue…

Alright, I didn’t. What I did do was use a slick SSH program on my phone called JuiceSSH to get access to the website and figure out what the problem was. It took me about 15 minutes of debugging on the somewhat limited user interface, but I was able to fix the site and have it working again without having to get home first. Once I’d verified it was up, I then used the email app on my phone to notify everyone that things were good to go, just like a good site administrator should.

It’s likely that the fix would have waited just fine until I’d gotten home. But it was pretty cool that, when I needed a way, my phone was able to provide a means for me to fix the problem. Chalk one up for the smartphone.

Twitget Improvement Addendum


A while back, I posted a modification to the Twitget Twitter widget I’m now using to display my tweets over there in the sidebar. I’ve now made some further improvements, since my original changes made an erroneous assumption about processing the tweet information.

First, hashtag links were losing their leading space when being displayed in the sidebar. The fix here was trivial, as it simply requires adding a space to the two preg_replace function calls in the process_links function that deal with generating the hashtag links.

The second fix is slightly more significant. Basically, if there are no URL entities in the tweet metadata, then the code needs to find link text within the tweet and turn it into a link. Here’s the new batch of code:

function process_links($text, $new, $urls) {
        if($new) {
                $linkmarkup = '<a rel="nofollow" target="_blank" href="';
                $text = preg_replace('/@(\w+)/', '<a href="https://twitter.com/$1" target="_blank">@$1</a>', $text);
                $text = preg_replace('/\s#(\w+)/', ' <a href="https://twitter.com/search?q=%23$1&src=hash" target="_blank">#$1</a>', $text);
        }
        else {
                $linkmarkup = '<a rel="nofollow" href="';
                $text = preg_replace('/@(\w+)/', '<a href="https://twitter.com/$1">@$1</a>', $text);
                $text = preg_replace('/\s#(\w+)/', ' <a href="https://twitter.com/search?q=%23$1&src=hash">#$1</a>', $text);
        }

        if (!empty($urls)) {
                foreach($urls as $url) {
                        $find = $url['url'];
                        $replace = $linkmarkup.$find.'">'.$url['expanded_url'].'</a>';
                        $text = str_replace($find, $replace, $text);
                }
        }
        else {
                if ($new) {
                        $text = preg_replace('@(https?://([-\w\.]+)+(:\d+)?(/([\w/_\.]*(\?\S+)?)?)?)@', '<a href="$1" target="_blank">$1</a>', $text);
                }
                else {
                        $text = preg_replace('@(https?://([-\w\.]+)+(:\d+)?(/([\w/_\.]*(\?\S+)?)?)?)@', '<a href="$1">$1</a>', $text);
                }
        }

        return $text;
}

The framework here is pretty much identical to before. The main addition is the else clause for the if (!empty($urls)) test. The code inside it is actually the previous link code- regexes like that are too persnickety to reinvent.

So this will suffice until the next problem surfaces.

Google Needs to Address This NOW


Via Instapundit, WOW. Because of Android’s backup capabilities, Google has millions of WiFi passwords and keys.

I tweeted it earlier, but this is serious enough that I felt compelled to write a post about it. It’s one thing for Google to collect information about spending habits and the like. I’m not particularly fond of it and I don’t take advantage of much of it. I do use Google search pretty exclusively and I have a gmail account as my main email account.

It’s quite another to have the capability to snoop my home WiFi because they have plaintext versions of my password. I don’t care what the law says, that’s an invasion of privacy. Full stop. If I wanted people to see what I was doing over my WiFi network, I’d never have set up the encryption. I don’t even let the router broadcast its ESSID, just to make sure people have to go the extra mile to find the network in the first place. This is so serious that I’m considering ditching everything Google.

This isn’t about having anything to hide. I’d say it’s the equivalent of having a fence around your yard that keeps the neighbors from looking into the yard. I want that barrier, it’s just the way I’m wired. I’m not doing anything nefarious, I just value my privacy.

I’m not a reflexive Google hater either. I can almost see the logic behind this happening. It would go something like this:

Hey! Let’s unify everything behind a Google account to make changing devices as simple as possible for the user. But that means we’ll need all the user’s info, including passwords. Gee, that’s some sensitive info though. We’ll have to encrypt it somehow. Yeah, but how do we put it onto another device? We’d still need access to the unencrypted data so we could put it on another device. Oh well, it’s for the user’s benefit.

Think about it this way: if they’ve got passwords to WiFi networks, then what other passwords do they have? I’m beginning to wonder if Google isn’t a subsidiary of the NSA. This is the sort of thing that could seriously hurt Google in terms of customers and users. Sure, it’s a pain in the ass to switch over because of the tight integration. But I just don’t think it’s worth the price. At this point, they’re worse than Facebook.

Minor Twitget Improvement


I noticed today that the Twitter feed over there was not displaying my tweets properly. Specifically, any links are displayed using the shortened URL structure which Twitter uses. I’d fixed this once before for the old feed, so I figured it was worth investigating to see if I could fix it in the new one.

As it happens, the modification is pretty trivial, with only a few lines of code added in one source file.

The file to modify is twitget.php. Start by changing the function process_links to look like the following:

function process_links($text, $new, $urls) {
    if($new) {
        $linkmarkup = '<a rel="nofollow" target="_blank" href="';
        $text = preg_replace('/@(\w+)/', '<a href="https://twitter.com/$1" target="_blank">@$1</a>', $text);
        $text = preg_replace('/\s#(\w+)/', '<a href="https://twitter.com/search?q=%23$1&src=hash" target="_blank">#$1</a>', $text);
    }
    else {
        $linkmarkup = '<a rel="nofollow" href="';
        $text = preg_replace('/@(\w+)/', '<a href="https://twitter.com/$1">@$1</a>', $text);
        $text = preg_replace('/\s#(\w+)/', '<a href="https://twitter.com/search?q=%23$1&src=hash">#$1</a>', $text);
    }
    if (!empty($urls)) {
        foreach($urls as $url) {
            $find = $url['url'];
            $replace = $linkmarkup.$find.'">'.$url['expanded_url'].'</a>';
            $text = str_replace($find, $replace, $text);
        }
    }
    return $text;
}

Here, we’ve added the argument $urls, which will come from the entities field of the tweet data. This data is used to create the appropriate anchor markup in the foreach loop. The actual link URL is maintained, while the display URL is changed to the expanded_url field supplied by the entities information. Note I’ve also modified the replacement string for hashtag searches, adding &src=hash to the href attribute in the anchor tag.

Now we need to add the entity data to the function calls. Search for the process_links function within the file. There were only two instances of it used in my version. Add the third parameter to the function calls as follows:

$link_processed = process_links($whole_tweet, $options['links_new_window'], $tweet['entities']['urls']);

That third parameter should be added to every invocation of process_links. That provides the URL information to make our earlier changes work.

That’s it. Save the file and tweets should now display the proper link text, while still linking to the URLs as specified by Twitter’s guidelines.

A Call to Arms…


Well, perhaps a call to fingers. A case for buying Unicomp keyboards, one of the last of the mechanical-switch variety.



Just got the latest upgrade for my Nook HD. It now has native access to the Play store and I was able to bring in all my apps from my rooted Nook Tablet that I hadn’t been able to bring over as yet.

In addition to access to the Play store, it also installed the Chrome browser and a number of other apps like Maps, Facebook, Gmail, Google+, Spotify and a few others.

In short, my Nook HD is now pretty much a tablet with all the capabilities therein. Even the standard Calendar application has properly synced up with my calendars.

It’s almost certainly too little, too late. But I appreciate the move at least.

Now I’ll have to see what other goodies might be lurking.

Custom More Text for WordPress Posts


A note to the less programming-savvy readers out there: this one is full of programming jargon and can likely be safely ignored. In fact, unless you’re writing a blog client, you’re likely to find this one pretty uninteresting.

For those who are interested, the rest is after the link with the custom text…

Click Here to Read More

Document Code the First Time Around


Lesson learned- for any coding much beyond a module or two, make sure you figure out a documentation method and stick to it across all modules. I’ve just spent the past several hours going through my blogtool code and fixing all those mistakes. Tedium doesn’t begin to describe the process. I can’t imagine having needed to do that for a more significant project.

Definitely a case where it pays to get it right the first time.

Fun With Numbers


Periodically, I try to take a look at our home finances to see if there’s something that can be done to find some hidden stash of money. So far, my efforts have been for naught.

One expense I always investigate is our mortgage payment. I’ve always tried to pay ahead on the mortgage to save on future interest payments. So yesterday I got curious about the best way to pay the curtailment: at the same time as the payment, halfway through the month, or on some other day of the month? I could have resorted to a web page that calculates amortization tables, but what fun is that?

So I wrote some python code that can be used to generate a repayment table.

Here’s the meat of it:

Months = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
Month = 0

def setPaymentParameters(payment, rate, day = 0):

    monthlyrate = rate / 100.0 / 12

    def _calcMonth(principal, curtailment = 0):

        def _calcADB(principal, curtailment, day):
            global Month, Months

            dim = Months[Month]
            Month += 1
            if Month == 12:
                Month = 0
            return ((day * (principal + curtailment)) + ((dim - day) * principal))/dim

        # _calcMonth code starts here...
        adb = _calcADB(principal, curtailment, day)
        interest = adb * monthlyrate
        return (principal,
                adb,
                interest,
                payment-interest,  # principal payment
                curtailment,
                (payment-interest)+curtailment,  # total principal payment
                principal-(payment-interest)-curtailment)  # ending balance

    return _calcMonth

So the setPaymentParameters function returns a function that will calculate the monthly interest, principal payment and so forth for a single month. The function returned is a closure over the set monthly payment, the interest rate and the day of month a theoretical curtailment payment is made. No curtailment is necessary for the function to work.

In order to determine the effect of curtailments separate from the normal payment, the calculation uses an average daily balance method. For instance, a normal payment is typically made on the 1st of the month and a separate curtailment payment is made on the 15th. The average is calculated by summing the daily balances for the days the principal sits at the post-payment level with the daily balances for the days it sits at the post-curtailment level, then dividing by the number of days in the month to get the average daily balance. In the absence of a curtailment, the calculation simplifies to the principal balance at the beginning of the period.
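As a quick numeric check of that averaging, here's a tiny sketch with invented figures (a $150,000 balance and a $500 curtailment paid on day 15 of a 30-day month; these numbers are mine, not from an actual loan):

```python
# Invented figures: $150,000 balance, $500 curtailment paid on day 15
# of a 30-day month, with the regular payment already applied on day 0.
principal, curtailment, day, dim = 150000.0, 500.0, 15, 30

# `day` days at the post-payment balance, the rest at the post-curtailment balance
adb = (day * principal + (dim - day) * (principal - curtailment)) / dim
assert adb == 149750.0  # halfway between 150000.00 and 149500.00

# with no curtailment, the average collapses to the starting balance
assert (day * principal + (dim - day) * principal) / dim == principal
```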

Following is an example of how to use the function:

Rate = float(5.25)
Payment = float(1000.00)

Amortization = []
CalcMonth = setPaymentParameters(Payment, Rate, 15)
Principal = float(150000.00)
while (Principal > 0):
    t = CalcMonth(Principal)
    Amortization.append(t)
    Principal = t[6]

print len(Amortization)
interestTotal = float(0.0)
for i in range(len(Amortization)):
    print map(lambda x: format(x, ".2f"), Amortization[i])
    interestTotal += Amortization[i][2]

print format(interestTotal, ".2f")

The output won’t be particularly pretty, but it will list the total number of payments made to pay off the loan, followed by a breakdown of the effect of each monthly payment, followed by a calculation of the total interest paid. A monthly payment line will look like this:

['150000.00', '150000.00', '656.25', '343.75', '0.00', '343.75', '149656.25']

From left to right, we have the beginning principal, the average daily balance, the interest for the month, the principal paydown, the principal curtailment, the total principal paydown and finally the principal balance after the payment is applied. Each subsequent month uses this final principal balance number as the beginning balance.

The above snippet doesn’t use a curtailment payment to accelerate the paydown of the mortgage. To do that, the while loop needs to be modified slightly:

Curtailment = float(500.00)
while (Principal > 0):
    if len(Amortization) == 0:
        temp = CalcMonth(Principal)
        t = (temp[0], temp[1], temp[2], temp[3],
             Curtailment,
             temp[5] + Curtailment,
             temp[6] - Curtailment)
    else:
        t = CalcMonth(Principal, Curtailment)
    Amortization.append(t)
    Principal = t[6]

The modification is needed for the first payment. Since it’s the first payment, no curtailment has been made yet, so the interest is calculated on the entire loan amount. The returned payment info then needs to be modified manually to insert the curtailment payment. Thereafter, all calculations use the curtailment.

Here are the first couple of payment output lines:

['150000.00', '150000.00', '656.25', '343.75', '500.00', '843.75', '149156.25']
['149156.25', '149424.11', '653.73', '346.27', '500.00', '846.27', '148309.98']

The curtailment payment is included and the ending principal balance includes the extra payment. Notice the second line’s average daily balance, which is higher than the starting principal balance. To fully understand that, first recall that setPaymentParameters was called with the day set to 15, meaning the curtailment payment is applied on the 15th of the month, not the same day as the normal payment. Therefore, there are 15 days where the principal sits without the curtailment payment applied; then the payment is applied for the remainder of the month. The end result is that the ADB, which is used to calculate interest, is slightly higher than the principal balance after the curtailment.

The final answer to my question about the optimal day to apply the curtailment turned out to be- it saves the most money if the curtailment is paid on the same day as the normal payment. This makes sense since, in general, paying earlier means the outstanding principal is reduced sooner, so interest is minimized.

But, that’s not the whole picture. Sometimes, for monthly household cash flow purposes, it is preferable to make multiple smaller payments. Will that result in a big difference in total interest paid? The answer there turns out to be no, it won’t. Depending on the amount owed and repayment length, the difference is only a few hundred dollars.
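That conclusion can be sanity-checked with a short, self-contained variation on the code above. This is a simplified model, not the exact code: the regular payment lands on day 0 and the curtailment on day `day` of every month, and the loan figures are made up:

```python
def total_interest(payment, annual_rate, principal, curtailment, day):
    """Total interest paid over the life of the loan, using an
    average-daily-balance model. The regular payment is applied on day 0,
    the curtailment on day `day` of each month."""
    months = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    monthly_rate = annual_rate / 100.0 / 12
    total, month = 0.0, 0
    while principal > 0:
        dim = months[month % 12]
        month += 1
        # the balance sits `day` days before the curtailment knocks it down
        adb = (day * principal + (dim - day) * (principal - curtailment)) / dim
        interest = adb * monthly_rate
        total += interest
        principal -= (payment - interest) + curtailment
    return total

# paying the curtailment with the payment (day 0) always costs less overall,
# though not by a dramatic amount
same_day = total_interest(1000.0, 5.25, 150000.0, 500.0, 0)
mid_month = total_interest(1000.0, 5.25, 150000.0, 500.0, 15)
assert same_day < mid_month
```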

Design Is Not a Straight Line


I’ve recently renewed my interest in my blog client blogtool. A big part of that renewal is due to unfinished business- I’d always meant to release it into the wild but had never taken the time to learn how to package it. I finally took that plunge a few weeks ago. Ever since, I’ve come up with a series of improvements, fine tunings and new ideas to make it a more capable tool and a better piece of software in general.


Release Announce- blogtool v1.1.0


I’ve just uploaded blogtool v1.1.0 to pypi.

The minor release number bump is due to switching the option parser library as well as adding the ability to process information from the standard input. The comment option has also been modified to take a couple of arguments.

I’ve added some spiffy, new web based documentation to help with getting up and running with blogtool. The documentation stuff was generated with the help of sphinx, a very cool tool that uses a different plain-text markup format that I’ll be exploring adding support for in blogtool.

Announce- blogtool v1.0.1


I’ve released blogtool version 1.0.1 into the wild.

This is a bug fix version. It fixes an error in HTML output where tags like <img> were not being properly closed. It also takes care of stray ‘&’ characters that need to be escaped.

It also fixes some bugs in the getpost option related to converting the post HTML into its markdown equivalent. Nested inline elements were not properly accounted for, and escaping of a number of characters was also added.

Release Announcement- blogtool


I wrote a blog client a couple years ago and have been developing it on and off ever since. One of the reasons I hadn’t done anything public with it is I needed to take the time to organize it appropriately for something like pypi.

I’ve finally taken those steps and have put it out into the wild. The source code is on github, here. I’ve also used python’s setuptools to publish it on pypi, here.

It works with my self-hosted WordPress blog and I’ve used it for all but a handful of the blog posts I’ve written there, so I consider it reasonably well tested for those purposes. It won’t support all of WordPress’s features, but I plan on changing that as I migrate some of the functionality over to using more of the WordPress API. When I originally wrote blogtool, WordPress didn’t have its own API for posting, so that’s why that shortcoming exists.

There are a couple of nice features to blogtool that I thought I’d mention here. One, it uses python-markdown to mark-up post text. It’s proven very capable for my style of blogging, which is 90% text. It handles pictures as well, and I’ve added a little wrinkle for that purpose. Rather than supply a URL or some such for markdown's syntax, simply supply a file path to the picture. Then, blogtool will take care of the rest.

The other nice feature is that posts can be retrieved and edited from a blog. When retrieving, it will reformat the HTML into markdown style format. This is useful for editing comments as well as posts.

So, there it is. My first published code project.

Outlets with USB


It’s an outlet. NO! It’s a USB charger! NO! It’s both in one!

So the Wife found these while surfing about a week ago. I thought they were a great idea right off the bat- combining a USB plug with a wall outlet. With a few of these sprinkled throughout a home, charging all the electronic gizmos out there becomes a lot less hassle- all you need is the cable, no more wall warts.

But, as in many things, it isn’t all strawberries and cream. First, they are expensive. A normal outlet runs a couple of bucks whereas these things are nearly $15 and even more expensive varieties exist. Second, the outlet is large. Much larger than a typical outlet and therefore difficult to install. If the electrical box has a lot of wires in it or is too small (the documentation says it needs to be a 16 cubic inch box), it simply won’t go in the wall. The one I installed barely fit and I had to finagle wires quite a bit. Even then, I couldn’t get it to fit flush against the wall.

The third problem was unexpected. USB is a standard interface with a complete electrical specification, including the power lines that run through it. Theoretically, that means any device that can be charged or powered through a USB cable should be able to plug into any USB hub or plug.

The reality is a bit different, unfortunately. While our mobile phones and iPod shuffles seem to be just fine, the Nook devices don’t seem to take kindly to the wall outlet USB ports. On our Nook Tablet, the LED on the cable seems to indicate that charging is occurring, but the device itself doesn’t detect charging. On my Nook HD, the situation is even worse: there is no LED on its cable and it doesn’t appear to charge at all.

Still, with more electronic gizmos to come, I think it’s worthwhile to invest in a couple of these outlets. Call it an idea whose moment has arrived. Enough electronics seem to be compatible that these outlets make for an easy way to have a couple of ready, and easily available, charging stations.

Dealing with Unicode in Python


I haven’t touched the code for the blog client I’d written in quite a while. This is largely because it works well for my purposes and I haven’t had the need to add further support for other features.

There has been one major shortcoming, however, that I hadn’t taken the time to investigate and correct. Oftentimes, when quoting text from an article on the web, I would get a unicode decode error related to the blob of text I’d copied from the browser.

Now, I understood in general terms what the problem was: stray characters within the copied text were not ASCII characters and markdown chokes on those characters. I had an inelegant workaround that kept me from properly dealing with the problem: I’d scan the text for offending characters, typically punctuation, and replace them with reasonable ASCII equivalents. It was a pain, but it worked.

Like all workarounds, this method had limitations. Specifically, certain special letter characters, like letters with umlauts, tildes, accent graves or accent aigus over them, cannot be duplicated in ASCII. The fact that I didn’t run into that problem a lot kept me from dealing with it sooner. Also, scanning a block of text for unicode violators is tedious.

What I failed to understand at the time was that the characters on a web page are encoded in some kind of format, like UTF-8 for example. For the basic ASCII characters (those without umlauts and the like), the UTF-8 bytes and the unicode code points are identical. The problem comes in when characters don’t line up so neatly. What I finally came to understand was that the encoded web page text needed to be decoded into unicode prior to processing. The concept seems so blisteringly obvious now that I’m actually perplexed as to how I never grasped it originally.

So I finally fixed the problem. Or, perhaps better put, I came up with a solution with a better set of trade-offs. Because in order to actually “fix” the problem, it would be necessary to always know how text had been encoded. Unfortunately, from the program’s perspective, it can’t be done.

But it can make some educated guesses.

Here’s the basic code that fixes the problem:

for encoding in ['ascii', 'utf-8', 'utf-16', 'iso-8859-1']:
    try:
        xhtml = markdown.convert(text.decode(encoding))
    except (UnicodeDecodeError, UnicodeError):
        continue
    except:
        print "Unexpected Error: %s\n" % sys.exc_info()[0]
        raise
    else:
        return helperfunc(xhtml)

In this case, markdown is an object for marking up markdown formatted text. Prior to passing the text to the markdown object, I decode it using the encodings that represent the most likely ones I’ll run into. If an encoding fails, a UnicodeDecodeError will get raised, which is caught by the first except clause. That clause merely passes control back to the for loop, where the next encoding is selected and tried. Rinse, repeat. When no exception is raised, control passes to the else clause, where normal program flow continues on the xhtml returned from markdown.

This section of code eliminates, in my case, almost all occurrences of the aforementioned unicode problems. But that’s because the vast majority of webpages I use are encoded using UTF-8. I’ve since added a command line option to specify the encoding to use for decoding purposes. This should provide a means to cover all other situations that arise. In this instance, when the user specifies the encoding on the command line, the user specification supersedes all other encodings and is used. The presumption is the user knows what they are doing.

The code to support that looks like this:

if charset:
    encodings = [charset]
else:
    encodings = ['ascii', 'utf-8', 'utf-16', 'iso-8859-1']

for encoding in encodings:

The rest of the code looks identical to the above snippet.

It was a good exercise for me to muddle through, as I now fully comprehend the unicode problems that can arise and how to deal with them. The basic rules are:

  1. Decode text going into the program.
  2. Encode text coming out of the program.
  3. Use unicode for the string literals within the program.

These should help keep me out of unicode trouble in the future.
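For what it's worth, here's how the same try-each-encoding idea looks as a small, self-contained sketch in modern Python 3, where the bytes/text split is explicit (the function name and the sample text are just for illustration):

```python
def to_text(raw_bytes, charset=None):
    """Rule 1: decode incoming bytes to text. Honor a user-supplied charset
    first, otherwise try a list of likely encodings in order."""
    encodings = [charset] if charset else ['ascii', 'utf-8', 'utf-16', 'iso-8859-1']
    for encoding in encodings:
        try:
            return raw_bytes.decode(encoding)
        except (UnicodeDecodeError, UnicodeError):
            continue  # this candidate didn't fit; try the next one
    raise UnicodeError('no candidate encoding worked')

# curly quotes push the fallback past ascii to utf-8
raw = 'na\u00efve \u201cquoted\u201d text'.encode('utf-8')
assert to_text(raw) == 'na\u00efve \u201cquoted\u201d text'
```

Note that iso-8859-1 accepts any byte sequence, so as the last entry it acts as a catch-all; the order of the list matters.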

Updated SSL Certificates


A while back, I linked to an article that explains how to become your own certificate authority. It’s a good article, and following the instructions yields the desired results. As to why I wanted to be my own certificate authority, I just felt it was a superior implementation to self-signed certificates. Once the upfront work was put in for generating the config file and the root certificate, the rest is a matter of a few commands.

Well, I was a bit naive about that last bit. I also have to at least pay attention to advances in cryptography, including whether current techniques are becoming insecure. Turns out the MD5 hashing algorithm used to sign SSL certificates is now considered broken, more or less.

Unfortunately for me, MD5 is the hashing function the above linked resource defaults to when creating the certificates. Fortunately for me, I’m not exactly a high value target for hackers. That said, I knew my certificates were going to be expiring soon anyway, so I decided to make the necessary mods to improve my situation.

I decided to change the hashing algorithm to SHA256, something that seems to be considered secure for the next decade or so. In order to make that change, the openssl.cnf file that’s created needs a few modifications. It is sufficient to change all of the md5 references in the file to sha256.

One gotcha that did trip me up, however, was that I created my new root certificate with a new, more descriptive name. So too with the corresponding private key file. This was all well and good, but I forgot to update the config file appropriately as well. In particular, under the CA_default section of the file, the certificate and private_key lines need to reflect the appropriate new file names.
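Concretely, the lines in openssl.cnf that need attention look something like this (the file names here are illustrative placeholders, not my actual ones):

```
[ CA_default ]
certificate   = $dir/home-ca-cert.pem         # must point at the renamed root certificate
private_key   = $dir/private/home-ca-key.pem  # and its matching private key
default_md    = sha256                        # was md5
```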

As a result, I thought I had generated new signed certificates for my mail server with the updated root certificate. But when I updated the Wife’s iPad, I got an error that the certificates weren’t considered trustworthy. It took me a while before I realized my mistake- I’d simply created new certificates signed with the old root certificate, so I hadn’t improved anything.

Now that I’ve straightened that out, everything is playing nicely again and I can forget about this stuff until next year. When I’ll probably go through it all again.

Trading procmail for sieve


WARNING: Much technical jargon to follow. Those not versed in *nix style email black magic and jargon should proceed at their own risk. YOU HAVE BEEN WARNED.

I’ll state up front that my home email system has been working just fine for years now. That doesn’t mean I was entirely pleased with it, though. The main source of my angst was the use of procmail as my mail filter for routing mail delivered to me to my various personal mail folders.

First, there’s the maintainability of a procmail configuration file. It’s not exactly pretty to look at. There are special flags and characters galore that need to be researched every time it’s touched, and special, obfuscated fall-through conditions that determine which processing paths are taken. In all, it’s the sort of configuration that makes total sense right up to the point where you get it working; two days later, it might as well all be Greek. To top things off, procmail is a dinosaur, with no active development or support for the code base.

Even so, I did put in the time to figure out how to leverage it to the best of its capabilities, and it has served me well over the years. My main bone of contention with procmail in my case is its position as a glue component bolting my spam filter, bogofilter, to my system’s MTA, exim. In short, it’s a kludge, and one that I’ve grown less fond of as time has passed.

To more thoroughly explain things, it’s necessary to mention another part of my mail system: dovecot, an IMAP server which has proven extremely useful over the years. The Wife and I can both access email from any number of devices (computers, tablets, phones, and so forth) from anywhere we have network access. All of these different forms of access are possible because of dovecot. As such, dovecot isn’t going anywhere. Now, dovecot happens to come with its own filtering capabilities, provided by an implementation of Sieve filtering, and also has its own LDA, appropriately named dovecot-lda. It’s the presence of these two elements that, to my mind, makes procmail superfluous, because between Sieve and dovecot-lda all the functionality of procmail is possible in a more modern package.

So why haven’t I ditched procmail yet?

Here’s the problem: I use user-level word lists for spam detection with bogofilter, as opposed to a global word list, and Sieve does not easily pair up with bogofilter; it’s also limited with regard to exim.

With bogofilter, it’s possible to use either a global wordlist for detecting spam or per-user wordlists, each of which resides in a user’s private directory. In this way, the Wife can have spam detected how she likes and I can have spam detected how I like. While it’s possible to incorporate bogofilter support directly into exim, that route seems to support only a global wordlist, which is a no-go for my situation.

Now, one might presume that I could still dump procmail and just use Sieve to run my mail through bogofilter for spam detection. It is, after all, a filtering language. Unfortunately, that’s not possible because Sieve does not support running external programs, so there is no way to get it to run mail through bogofilter.

So to take advantage of Sieve, the processing has to take the following path: exim routes the mail to an individual user, where (somehow!) it is run through bogofilter, which slightly modifies the mail’s headers to mark it as spam or not; the modified mail must then (somehow!) be handed to dovecot-lda, which runs it through a Sieve filtering script. The Sieve script can then check the mail for spam and place it in the appropriate mail folder.

As hinted at, the bugaboo has been how to get exim to hand the mail to bogofilter so it can use the user’s word list for spam detection and then pass the resulting mail to dovecot-lda.

It turns out to be possible with the help of exim’s support of .forward files, as well as a little helper script.

To make it work, start by enabling the Sieve plugin in dovecot. Do this by editing /etc/dovecot/dovecot.conf and adding the following configuration:

protocol lda {
    …
    mail_plugins = sieve
    …
}

(The ‘…’ characters just indicate the possible presence of other lines within the brackets. They shouldn’t actually be in the file.)

Once this is done, restart dovecot however is appropriate for your system. On Debian, the /etc/init.d/dovecot restart incantation works nicely. Out-of-the-box support for a ~/.dovecot.sieve file has now been enabled.

Next, create a .forward file for exim as follows:

# Exim filter
if error_message then finish endif
pipe "/home/user/.forward-helper"

Now create the file /home/user/.forward-helper as follows:

#!/bin/sh
/usr/bin/bogofilter -u -e -p -d /home/user/.bogofilter/ | /usr/lib/dovecot/dovecot-lda

The main thing to check in these commands is that all of the paths are correct. The path following -d should be the path to the bogofilter wordlist directory. Similarly, make sure the paths to bogofilter and dovecot-lda are correct for your system. In both cases above, user should be substituted with the appropriate username. Finally, the helper script needs to be executable (chmod +x /home/user/.forward-helper).

What happens now is that after exim figures out which user to route mail to, it runs that user’s .forward file. The file is set up as an exim filter file and pipes the mail to the script .forward-helper. That script takes care of running the mail through bogofilter and then handing the resulting mail off to dovecot-lda. The helper script is necessary because of the multiple pipes: while it is possible to run the mail through bogofilter directly from the exim filter file, the result cannot be captured for further use, such as piping it to dovecot-lda. The helper script takes care of that for us.

At this point, all mail will start showing up in your INBOX (I’m assuming use of maildir here). For a start, here’s how to separate out spam, ham and unsure mail messages using Sieve:

require "fileinto";
if header :contains :comparator "i;octet" "X-Bogosity" "Spam" {
    fileinto "spam";
} elsif header :contains :comparator "i;octet" "X-Bogosity" "Unsure" {
    fileinto "unsure";
}

Place this snippet into a file named .dovecot.sieve in the user’s home directory. Now spam will go into a mail folder called “spam”, mail that can’t be classified will go into a folder called “unsure”, and the rest will go into the user’s INBOX. See RFC 3028 for a detailed explanation of how the above works, as well as how to further filter mail.
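Extending the script is straightforward. As a hypothetical example (the list name and folder below are made up, not part of my actual setup), a rule to file mailing-list traffic into its own folder would look something like:

```
require "fileinto";
# File messages from a hypothetical mailing list into their own folder:
if header :contains "List-Id" "example-list" {
    fileinto "lists";
}
```

The same pattern (test a header, then fileinto a folder) covers most everyday sorting needs.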

The solution seems somewhat trivial, but as a non-sysadmin lacking decades of experience working with email systems, I can say it took me quite a while to figure out. Initially, I searched high and low for someone else who had done this, to no avail. Then I had to become somewhat steeped in the machinations of exim to figure out how to make it work. In all, it’s a satisfying solution, and the new Sieve scripts are much easier to understand and maintain. So long, procmail.

Readability Bookmarklet for Dolphin


I’ve been using Readability more and more recently. I’m not exactly an early adopter in this case, but I’m glad I found it. It’s an app and service that reformats an article into a very easy to read format. It eliminates all the cruft like ads, banners and so forth that accompanies the typical web article.

There are a couple of ways it can be used. One is as a “Read Later” service, where an article is saved to a Readability account. The idea is that the article can then be accessed through the Readability application on a smartphone, tablet or computer browser.

The other way to use it is as a “Read Now” service, where the article of interest is reformatted at the push of a button. These buttons are easily available from their website for desktop browsers like IE, Chrome or Safari. Unfortunately, they don’t have anything for mobile browsers.

Fortunately, someone figured out how to create a bookmarklet for a mobile browser to achieve the same effect.

I’ve been using the Dolphin Browser on my Nook, and I’m happy to report that the steps explained at the link work perfectly. Basically: go to the link using Dolphin; copy the JavaScript code; create a new bookmark and paste the code into the URL box; finally, save it as “Read Now” or some other suitable name. To use it, simply tap the bookmark when an article you’d like to read comes up in the browser.

Trust me, you’ll be happy you did.