The most disappointing gadget of 2005: the Nokia 770 Internet Tablet

I had been excited by the prospects of the Nokia 770 from the moment I read about it on Planet GNOME. An internet tablet that ran Open Source Software and used GNOME/GStreamer bits. It was hard not to be excited. The early reports from the people who received developer units were promising. Software was being ported and written, and things seemed to be progressing by all accounts.

Nokia 770

The Nokia 770 was finally released, but it was only available online. I waited patiently for them to appear locally. Jorge finally spotted one in the wild, at a CompUSA in Michigan. I braved a snowstorm, headed out to my local CompUSA, and picked up the only one in inventory. I was almost giddy when I got home and plugged it in to charge. And then I used it.

The review Eric wrote for Ars Technica sums up my feelings on it nicely. I really wanted to like the 770. It had the potential to be a great device but fell far short of expectations. The hardware seems underpowered, with the lack of RAM crippling the performance. Beyond that, the software itself was buggy — even for a first release. I could forgive the occasional glitch or two and wait for an update, but the persistent issues with the UI (slow visual response to operations, applications crashing or refusing to start until the device was rebooted, and minimal configuration options) made it a profound disappointment.

Apparently I’m not the only one to return the Nokia 770, either. When Jorge returned his, the manager came to talk to him. He wanted to know if it was really that bad, because his store had seen a 100% return rate on the device. Let that sink in for a minute: every single person who purchased the Nokia 770 at that store returned it. That doesn’t bode well for a future revision that addresses the flaws of this first release. Nokia had a great idea, but the poor execution leads me to proclaim the Nokia 770 the most disappointing gadget of 2005. Better luck next year, guys.

Bug/Patch-a-week challenge

I had breakfast with Jon Trowbridge (of Beagle fame) last weekend. One of the things we talked about was contributing to open source. It was a refreshing and eye-opening conversation. People want to contribute but don’t know where to start. There is this vast amount of software, plenty of bugs or missing features, but where do you begin?

We don’t have enough time in the day to do all of the things we’d like, but we tend to waste a minute here or there reading Penny Arcade or Google News or whatever. By themselves those minutes don’t seem like much, but add them up and you might be surprised how much time you’ve wasted.

So, I issue a challenge to everyone on Planet, one that doesn’t require the ability to code (but if you possess such ability, use it). Pick an application you commonly use and find its Bugzilla. Bookmark it if necessary. Once a day, pull it up while you’re surfing and skim through it. See a bug that looks familiar or interesting? Reproduce it and add details. That confirms there’s a real problem and helps track down the bug. Developers appreciate this, trust me.

If you can write code and you see one that you think you can fix, give it a try. No patch is too small. If you fix it, attach the patch and add yourself to the CC list. You’ll get a nice notification when someone responds to your patch and you can feel good about yourself for contributing and maybe fixing something that’s been annoying a user for months.

For as little as five minutes a day (okay, so I sound like Sally Struthers) you too can be a part of what makes open-source great. See your name in lights (or just plain text).

Changelog:
2005-10-25 Larry Ewing

* src/FlickrRemote.cs: Integrate patch from Adam Israel to quote
tag names with spaces in them.

Ordinary users complain when something doesn’t work, and rightly so. Be an extraordinary user and do something about it.

High performance mod_perl2

I’ve been spending a lot of time porting some code from ASP.NET to mod_perl2. Along with that rewrite, I’ve been migrating a side business away from managed hosting running Windows 2003 to Linux. Between the porting, the migrating, managing the clients, and working to grow the business, I’ve had time for little else.

I’ve discovered some pretty cool things along the way, though.

Apache2’s prefork model seems to work much better than its worker model, specifically when interacting with mod_perl2. Gathering specific statistics is difficult, so this is strictly empirical data. The worker model worked just fine during initial testing. Once I started pushing some real traffic through the system (on the order of 1M requests/day), I began noticing some odd behavior. Internally, my application tracks how long each request takes to complete, from start to finish. Those times remained consistent throughout the process, but requests going through Apache took a minimum of 20 seconds.

Something was obviously wrong, so I started eliminating possible bottlenecks: network, processor, and memory. All of those checked out fine. I ran ngrep (one damn fine tool, btw) and watched the request. It was hitting Apache, hanging for a while, and then spitting back a response. So I tested against a static file — it was fast. I wrote a small mod_perl2 handler that did nothing but return OK. It hung. So I figured that something in the hand-off of the request from Apache2 to mod_perl2 wasn’t working right.
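
For the record, that do-nothing handler is only a few lines. Here’s roughly what it looked like (the package name is just a placeholder):

package My::NoOp;

use strict;
use warnings;

use Apache2::RequestRec ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;                  # Apache2::RequestRec object
    $r->content_type('text/plain');
    return Apache2::Const::OK;      # no body, no work; just exercise the dispatch path
}

1;

Wiring it up takes nothing more than a SetHandler perl-script and a PerlResponseHandler line inside a <Location> block, and even that trivial handler hung under worker.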

I ran Apache2 through strace but didn’t see anything enlightening. I googled, read, and googled some more. I tweaked all of the worker MPM settings with no success. There were plenty of processes waiting for connections but no apparent reason for the requests to be delayed. Then I remembered that someone on IRC had asked me about prefork vs. worker. I had assumed, because worker was the default when I apt-get installed mod_perl and Apache2, that it was the MPM mod_perl2 preferred.

So on a hunch I removed the worker MPM in favor of the prefork one. I had to tune the prefork settings a bit from the defaults. Suddenly everything became responsive: no processes sitting in “sending reply” and lightning-fast response times from Apache. Memory usage seems lower and processor usage is almost nil. It’s working so well, in fact, that I’m afraid to go to bed for fear it’s just my imagination.
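
For anyone hitting the same wall, the relevant prefork directives look something like the following. The numbers here are placeholders, not the values I settled on:

<IfModule prefork.c>
    StartServers          10
    MinSpareServers        5
    MaxSpareServers       20
    MaxClients           100
    MaxRequestsPerChild 2000
</IfModule>

With mod_perl it’s worth keeping MaxRequestsPerChild finite so that children get recycled before their memory footprint creeps up.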

nautilus-wallpaper

I was chatting on IRC and organizing my wallpapers when I realized that there was no way to set my background image directly through Nautilus. I cracked open Anjuta and got to work.

Here is the result: nautilus-wallpaper

I still need to wrap my brain around autoconf, automake, etc. I did manage to hack the Makefiles so that the extension is installed to prefix/lib/nautilus/extensions-1.0. Hopefully there aren’t any other quirks with that.
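
For anyone fighting the same automake battle, the relevant chunk of the Makefile.am boils down to something like this (the file and variable names are guesses; the CFLAGS/LIBS variables assume a PKG_CHECK_MODULES check in configure):

extensiondir = $(libdir)/nautilus/extensions-1.0

extension_LTLIBRARIES = libnautilus-wallpaper.la

libnautilus_wallpaper_la_SOURCES = nautilus-wallpaper.c
libnautilus_wallpaper_la_CFLAGS  = $(NAUTILUS_CFLAGS)
libnautilus_wallpaper_la_LIBADD  = $(NAUTILUS_LIBS)
libnautilus_wallpaper_la_LDFLAGS = -module -avoid-version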

Where am I?

Just a little bit of code I’ve been working on. Call this a proof of concept to test the geo-targeting library that I’m using. Taking that (a commercial product) and combining it with the Google Maps API, I’ve come up with something kind of fun:

Where am I?

It’s not 100% accurate and it only goes down to the city level, but it’s close enough for most of the purposes I need it for. It’s some pretty interesting technology, too. My next step is to extend it to target not only the city you (or your ISP) are in, but also other cities within a certain radius.
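
I’m not posting the real lookup code (the library is a commercial product), but the idea boils down to an IP-to-city lookup feeding coordinates to the map. A rough sketch in Perl, assuming a MaxMind-style city database via the Geo::IP module (not necessarily what I’m actually using), looks like this:

use strict;
use warnings;
use Geo::IP;

# Database path is just an example location
my $gi  = Geo::IP->open('/usr/local/share/GeoIP/GeoIPCity.dat', GEOIP_STANDARD);
my $rec = $gi->record_by_addr($ENV{REMOTE_ADDR} || '127.0.0.1');

if ($rec) {
    # city, region, latitude, and longitude come straight from the lookup
    printf "You appear to be near %s, %s (%.4f, %.4f)\n",
        $rec->city, $rec->region, $rec->latitude, $rec->longitude;
}

The latitude/longitude pair then gets handed to the Google Maps API to center and zoom the map.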

Benchmarking Apache

I’m writing a new web application using mod_perl 2.0. It’s heavy on network I/O, so I’m doing some benchmarking and testing with simulated I/O to see just how many requests/second I can expect a single server to handle. While reading through Practical mod_perl I discovered one of the greatest tools ever: ab.

stone@moradin:~ $ ab -n 5000 -c 10 http://localhost/echo
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.141 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 1998-2002 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Finished 5000 requests

Server Software: Apache/2.0.54
Server Hostname: localhost
Server Port: 80

Document Path: /echo
Document Length: 33 bytes

Concurrency Level: 10
Time taken for tests: 2.326156 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 1145916 bytes
HTML transferred: 165132 bytes
Requests per second: 2149.47 [#/sec] (mean)
Time per request: 4.652 [ms] (mean)
Time per request: 0.465 [ms] (mean, across all concurrent requests)
Transfer rate: 481.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.7      1       8
Processing:     2    2   1.2      3      10
Waiting:        0    1   0.9      2       8
Total:          3    3   1.2      4      11
WARNING: The median and mean for the waiting time are not within a normal deviation
These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      4
  66%      4
  75%      4
  80%      4
  90%      4
  95%      5
  98%      7
  99%      7
 100%     11 (longest request)

This is awesome. Not only does it rock, but it’s included by default with Apache.
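
In case you’re curious, the /echo handler being hammered there is about as small as a mod_perl2 responder gets. I haven’t posted the real one, but it’s roughly this shape (the package name and response body are placeholders):

package My::Echo;

use strict;
use warnings;

use Apache2::RequestRec ();
use Apache2::RequestIO  ();    # provides $r->print
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;

    $r->content_type('text/plain');
    $r->print('echo: ' . ($r->args || '') . "\n");   # echo the query string back

    return Apache2::Const::OK;
}

1;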

Detroit Hackfest Redux

I’m back home and more or less caught up on sleep after this weekend’s Ubuntu Detroit Hackfest. It’s a nearly six-hour drive each way but it’s worth it to hang out with everyone occasionally. We ate good food (thanks to Kattni’s wicked cooking skills) and even did a little keysigning. Thanks again to Kattni for letting everyone crash at her place. Saves money on a hotel, which makes it much more likely I can attend in the future.

In the end, I finally decided to start hacking on Tomboy. Step one is to make it easy to find and connect to other Tomboys on the network.

Thanks to Charlie, who told me about the GnomeVFS bindings for Zeroconf aka Rendezvous aka Bonjour. If you don’t know what that is, in a nutshell, it lets you publish what services are available on a machine. You could, on your mail server, use Zeroconf to announce that you have smtp, imap, and pop3 available. It’s a way to make shit easy when connecting people together.

I’ve started working on Mono bindings for the Zeroconf stuff. I have some of the calls working. Others I’m having some difficulties with, and I may need to turn to the gtk-sharp-list mailing list for some advice. One particular struct is giving me headaches.

I’ll get the code posted up here soon and hopefully get some working bindings released in the near future. Once the binding is done I can start to do the actual integration work in Tomboy, which will be exciting.

After a while the code began to blur and progress ground to a halt. Still, I made lots of headway and got to meet some cool new hackers. Hopefully I’ll be able to make it again in the future. After I got home I realized that I never did leave Andrew any money for the food we consumed. Sorry dude! I’ll make it up to ya when you’re in Chicago next month.

Detroit Hackfest

I’m heading to Detroit tomorrow to hang out with Jorge and the rest of the Detroit crew. A weekend of hacking on code and eating unhealthy food. Just my idea of relaxation. My only dilemma is what to hack on. Here are the current candidates:

tomboy – Adding network support so that you can access and search notes across multiple machines. Ideally advertised with rendezvous. No more wondering which machine you left a note on.

libnautilus-pr0n – Add some new functionality to my nautilus media-sorting extension, like exif tags and perhaps video support.

gnome-launch-box – A very cool launcher. I’ve patched it to work with Ubuntu Breezy. There’s still work to be done to improve the performance. I still don’t know if it is actively being maintained; I haven’t been able to get an answer from the project’s maintainers.

Expresso – Porting Expresso to Linux/Mono. It was originally released on the Code Project. I’ve talked to the author, Jim, and he confirmed that I’m free to port the original code to whatever I need. This would be a very useful tool to have in Linux.

I know Jorge and n0p are interested in gnome-launch-box. Andy is excited about porting Expresso. I want to do them all but I know that’s not logistically possible. I guess I’ll let peer pressure decide for me.

Wifi for everyone — Penguicon-style

I rolled into the hotel for Penguicon around 2pm today, got checked in, and inquired about wifi access. Last year they had arranged for free wifi access for the weekend. This year, however, the hotel is only offering us $5 coupons per day, each good for 24 hours of access. When the clerk was getting my coupon, I glanced at the stack and noticed that some of the numbers were the same. I promptly set up an ad-hoc wifi network so that the rest of the gang could use the internet.

Later that night, when Kyle got his coupon, we confirmed that the coupon numbers were identical. Ineptitude rules.

Flickr and f-spot

I went to export a picture from f-spot to flickr this morning. The export failed and reported a problem logging in. Naturally, I double-checked my password and tried it a few more times without luck. Flickr had recently made some changes on their end, so I killed f-spot and fired it up in a terminal. Sure enough, somewhere during the login process it was throwing an integer overflow exception. Flickr is just too cool to be held back, so I grabbed f-spot from CVS, found the bug, wrote a patch, and saw it get committed this morning.

It’s a good start to the day so far. A meeting with a client I expected to last more than an hour took all of 10 minutes and I feel pretty good considering I was up until 2am trying to downgrade my laptop from Breezy to Hoary (and subsequently repair udev) to troubleshoot a smbfs issue.

Hopefully this bodes well for my productivity at Penguicon. I’m going to be hacking on some Mono apps all weekend, if all goes as planned.