Should you get an Amazon Echo?

The Amazon Echo is a high-quality device in both form and function. For my interests, the special price of $99 for Prime members was a bargain. For others, the Echo is going to have to do a lot more, especially to justify its intended regular price of $199.

I’ve been using my Echo for over a week and I like having it around, but should you get one at $99? Well, consider what you’re looking for in the Amazon Echo:

  • A personal assistant
    Right now the Echo’s capabilities are limited to a small set of actions in a handful of categories – music, lists, information, weather, and time. If you like technology and make lots of searches or queries in at least one of those categories during the day, then I would consider it. Otherwise, the Echo’s novelty is likely to fade quickly.

    Simply ask whether any of your most-used mobile apps fit into the categories above. I find that I use the Echo mostly for its music and news broadcasts, and I often ask for help with the weather, my shopping list, and timers – all things I already do daily with the corresponding mobile apps.

  • Bluetooth speaker
    The Echo works really well as a Bluetooth speaker and has no trouble filling a large room with good-quality sound. Playing bass-heavy music, like hip-hop or dance, at the highest volume didn’t produce any obvious distortion. But even at $99, buying the Echo as just a speaker is a hard sell, considering that another speaker on Amazon could likely outperform it at half the price.

    However, if the prospect of having the additional personal-assistant functionality is remotely interesting, then I’d consider it. What you’ll experience at least once when you use Echo, like when setting up your Bluetooth connection, is how easy it is to use technology when your voice becomes the UI.

    I’ve never owned a Bluetooth speaker before, but now I use Echo to play audio from my iPhone or iPad when using apps like Pandora or Newsy. I prefer the enhanced sound quality, plus it’s cool that I can still control the volume with my voice.

  • Tech street-cred
    If you’re a technophile, a hard-core developer, or someone who likes to play with cutting-edge technology, then the Echo is for you. It’s already a very robust piece of technology which offers a glimpse of the future and will help you re-imagine personal computing. However, to ensure a lasting relationship, refer to the guidelines I mentioned above under Personal Assistant, as they still apply… at least until Amazon releases a dev kit.

I can envision a day when this type of technology plays a huge part in personal computing, but there’s a long road ahead before we’re there. So if you’re still on the fence about the Echo, you should probably sit this one out for a while and wait until the technology matures.

LightCharts – Lightweight charts for Flex

A lightweight line-chart library for Flex.

I created LightCharts for a project of mine that involves tracking many stock-market symbols. Originally I was using the Adobe Flex Charting library, but found that performance suffered greatly given the number of charts I was using (around 60+), the amount of data displayed, and the constant real-time updates. This isn’t necessarily due to poor coding on Adobe’s part; their library contains an amazing number of features to handle a variety of needs, and consequently it’s very heavy. I didn’t need a lot of features, just a nimble way to display data.

I searched the Internet for other charting libraries which I could use and stumbled across a fantastic set of components created by Keith Peters called Minimal Comps. Keith’s library is extremely lightweight and it would have been a good fit, but MinimalComps is geared towards the pure Flash environment and not Flex – a major issue being the disparity in the component lifecycle.

Nonetheless, I was inspired by his code so I decided to use it as a starting point, adding and changing what I needed along the way.


How to build and install the Metakit DB for Python on Mac OS X

Ever since I used it years ago on a geek project for my Zaurus, the Metakit DB has been a favorite of mine. I had the chance to use it again on another personal project, this time on Mac OS X. Unfortunately, the prebuilt binaries on the Metakit site are for older versions of Mac OS X, so I had to build it myself.

Normally one would simply follow the “Metakit installation instructions”:, but they are dated and didn’t work correctly with 10.5 Leopard. I scraped together enough information from around the Internet to get it working, but it took a lot of research. To save others the same hassle, I have collected all of the changes and put them here in their entirety:

Building Metakit

Make sure you have Xcode installed on your system before starting.

Get the latest source from the “Metakit downloads page”: At this time the latest version is @metakit-

Uncompress the archive in a work directory and run the following commands:

Note: Your Python install might be in a different location. If so, give the @--with-python@ arg the proper value.
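
The commands themselves didn’t survive formatting; as a rough sketch of the usual out-of-tree Metakit build (the directory names and the Python path are assumptions – adjust them to your download and your install):

```shell
cd metakit-*/                  # the directory created by unpacking the archive
mkdir -p builds && cd builds   # Metakit is built from a separate builds directory
# Point --with-python at your Python framework (path is an assumption):
../unix/configure --with-python=/System/Library/Frameworks/Python.framework/Versions/2.5
```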

“Fat” binary setup

If you need this to run on the PPC architecture you will need to make a couple of modifications to @./builds/Makefile@ after running @configure@, otherwise you can skip this step and build the binaries with @make@.

Find @CXXFLAGS = $(CXX_FLAGS)@ and change to the following:
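
The edited line was lost from this post; the usual universal-binary change is to append the two @-arch@ flags, along these lines (my reconstruction, not necessarily the exact original):

```
CXXFLAGS = $(CXX_FLAGS) -arch ppc -arch i386
```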

Find @SHLIB_LD = g++ -dynamiclib -flat_namespace -undefined suppress@ and change to the following:
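
Again, the edited line is missing; the likely change appends the same @-arch@ flags (my reconstruction):

```
SHLIB_LD = g++ -dynamiclib -flat_namespace -undefined suppress -arch ppc -arch i386
```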

Build the binaries

Run your typical @Makefile@ commands:
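
The command listing was lost in formatting; presumably just the standard build, run from the @builds@ directory:

```shell
# From the builds directory created during configuration:
make
```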

Installing Metakit

Rename the shared library which is now in the @./builds@ directory:
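
The rename command is missing from this post. The Python extension is typically built with a @.dylib@ extension while Python looks for @.so@, so the fix is likely something like this (the module name is an assumption – check what @make@ actually produced):

```shell
cd builds
mv Mk4py.dylib Mk4py.so
```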

And copy the following files to @/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python@ (be sure to adjust the path name for your version of Python):
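
The file list didn’t survive; presumably it’s the renamed module plus the @metakit.py@ wrapper shipped in the source tree’s @python@ directory – something like this, run from the top of the source tree (names and paths are assumptions):

```shell
sudo cp builds/Mk4py.so python/metakit.py \
  /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/
```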

Testing Metakit

At this point you should have a working system and ought to be able to run the following command in a Python shell without issue:
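
The command itself is missing; presumably it was a simple import check along these lines:

```shell
# If the import succeeds with no traceback, the install worked.
python -c "import metakit"
```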

Enjoy!

h3. Resources

  • “Metakit for Python website”:

  • “helpful instructions from”:

My First Document Scanner

Today I ordered a NeatScan document/receipt scanner made by NeatCo!

I’m pretty excited because I’ve wanted one ever since the days of the Visioneer PaperPort. The idea of a paperless office really appeals to me, mostly because of my habit of keeping every receipt and document I receive, whether electronic or on paper. Unfortunately, I’ve never figured out a method to process and save the paperwork on an ongoing basis – it usually falls apart when I get too busy with life.

I had done a little research several months ago to see if there were any changes in the document-scanner market since the last time I checked. From that I made my rough list for my ultimate document-scanning setup:

  • Small, desktop scanner
  • Ability to create searchable PDFs
  • Software to help me organize everything

I started looking again because I came across two Black Friday deals on document scanners: the NeatScan to Office by NEAT and the ScanSnap S300M by Fujitsu. (The deal from NEAT was for the PC version of the scanner, but I read on their website that they were offering the Mac software for free as long as you provided the serial number of the scanner.)

I found that Fujitsu’s ScanSnap always gets great reviews, so it probably wouldn’t be a bad choice no matter what. One of its cool features is the ability to feed multiple documents and scan both sides at the same time! Very nice.

NEAT released a Mac version of their software last January at Macworld. From what I could tell from the NEAT website and forums, they are committed to creating a great Macintosh experience, and their goal is to match the functionality of their Windows software. One cool feature of the NEAT software lets you print directly to it (e.g., print an email receipt from Amazon to the software, which will then process it with OCR into a searchable PDF). This could definitely be handy for me since I get a lot of electronic receipts for the things I order online.

Here are some of the reasons I decided on the NEAT scanner:

  • It was a little cheaper than the ScanSnap
    (I wasn’t sure I was going to appreciate a document scanner.)
  • The included software did OCR (important for making searchable PDFs)
    (Though during this quarter [2008Q4], Fujitsu is offering a rebate for a free copy of the Readiris Pro OCR software.)
  • The included software will organize receipts and documents
    (I was considering purchasing Yep as my tool for organizing PDFs; I always have the choice if I find the NEAT software lacking.)

h3. Resources

  • “MacNN Forum: NeatReceipts or Fujitsu?”:

  • “TUAW Review: NEAT Receipts for Mac”:

  • “MacMost Video: NeatReceipts Review”:

  • “Macworld Review: Fujitsu ScanSnap S300M”:

  • “ ScanSnap S300M”:

  • “Readiris Pro Website”:—Product-list.aspx

  • “Pricegrabber Review: Readiris Pro 11”:

My predictions about the success of Apple’s iPhone

After digesting all of the stories, videos, blogs and comments on the ‘Net, I make my predictions about the success of the iPhone.

Here we are on the eve of what is arguably the hottest product launch in the history of computing – the Apple iPhone. Over the past few weeks the amount of buzz surrounding it has reached frenzied proportions! As usual, I read the same reports on the ‘Net typical of an Apple product launch – “Apple is great!” “Apple is going to fail!” “Apple did it all wrong!” “It’s too expensive” “I want one!” “No one is going to want one!” “The iPhone doesn’t do <insert favorite technology or feature here>”. So, I’m here to add to the insanity and give my take on the success of the iPhone, but through a different perspective. In short,

Apple is going to sell millions of iPhones … and this is just the beginning.

Why will it be so successful? Apple expertly markets and sells technology catered to the largest segment of consumers – people who, based on their technology needs and knowledge, are simply average. Most companies try to do this, too, but fail to excite consumers the way Apple does. Their secret sauce is one thing – the user experience – and it permeates everything, from marketing to packaging to purchasing to product use. It is this user experience which fully resonates with the majority on so many different levels.

!(Graph of Moolah)!:

click on the image for a larger view

The market and the majority of which I speak are best described in the book “Crossing the Chasm” by Geoffrey Moore. Graphically, his view of the market is a typical bell-shaped curve split into five segments of consumers, each with its own technical needs, motivations, and comfort levels: the Innovators, the Early Adopters, the Early Majority, the Late Majority, and the Laggards. By far the two largest of these groups – and the ones to target to be extremely successful – are the Early Majority and the Late Majority. And, interestingly, it’s not necessary to target these consumers with the best technology[1]!

No, Apple doesn’t always compete using the latest technology; whatever technology they do choose, they make easier to use and more beautiful than any existing product. With the iPhone they have once again upped the ante by bringing to market the world’s first consumer multi-touch display, which uses hand gestures to navigate the phone’s menus and tools. The screen is gorgeous, the graphics are adorable, and people will have fun using this phone.

It’s apparent that Apple put a lot of research and development into the interface, and they will need to recoup their costs. Thus paying a premium for this device shouldn’t be unexpected (though it is actually not that expensive compared to other smart phones on the market today[2]).

Initially, price won’t be an issue because the first purchases will be made by the market groups called the Innovators and the Early Adopters. These people are willing to spend extra money for something cutting edge, but to become entrenched in the later groups the price must fall within their budgets (as the iPod’s did). I expect the first price drop to happen in 6–12 months, with the second revision.

Of course there are things which might hamper the iPhone’s success:

  • Manufacturing defects
    I think Apple is poised and ready to deal with any issues which come up.

  • Usability
    Notably the keyboard and battery life, though I’m sure Apple has done a lot of testing. Initial reports are positive.

  • AT&T service
    This isn’t really under Apple’s control, but they have taken over part of the process, notably activation of the phone which will be done through iTunes!

It’s obvious that Apple has worked extremely hard getting the iPhone to market. From outside appearances, the execution has been superb, especially considering all of the components Apple has brought together within a few short years.

The early-summer release will give Apple time to work out the initial bugs and give the market a chance to see how cool the iPhone really is. Fast forward six months, and Apple will once again be selling the most desired gadgets of the Christmas season.

Given the new touch interface, the iPhone is the most revolutionary mobile phone / Internet device / music player the world has ever seen. Apple will sell millions, make billions, and fully deserves to do so!

fn1. Two notable examples are the Apple iPod music player and the Nintendo Wii game console. Neither product incorporates the best technology on the market, yet each is among the most desired items in its category. Their popularity derives from the user experience.

fn2. Check eBay for the Nokia N95. (At the time of this writing, the N95 was selling for ~US$650.)

Ditching Zope and Plone for simplicity and creativity!

To be fair, I had found a theme plug-in that served as a template for creating your own theme. I knew that I could do it, but I also knew it was going to take time. For the photo gallery, I couldn’t find exactly what I wanted, but I knew the answer was a combination of my old photo gallery (which didn’t run on the current Plone) and two other gallery plug-ins I had found.

I was disheartened because I knew that finding a solution to the data issue was going to take even more time – time I didn’t really want to spend. The theme was going to take time, the photo gallery a lot more, but the data problem could be the grand-daddy of all timesinks. I weighed a lot of factors in my decision, from the cost of hosting to the flexibility in changing the user interface, the available themes, and so on.

The decision to switch to an application running on my desktop means that I can no longer edit my site anywhere in the world using a web browser, but I really never did anyway. And, because the sites created by RapidWeaver are static and not dynamic, I can put them just about anywhere, including a server residing at home.

I think that Zope and Plone are amazing pieces of software, but together they comprise a full content management system and they were overkill to run my small web site.

Eeny, meeny, miny, moe, which DVCS should I let go?

A critical tool in a programmer’s chest is a version control system (VCS) to keep track of software changes. I was initially going to install the Subversion VCS for my new projects, but decided to look around for alternatives. Over the past couple of days I have found that there is a lot of activity going on in this historically stagnant part of computing.

My search for alternative version control systems started after I had successfully hacked my NSLU2 network appliance so that I could install on it the very popular open-source VCS Subversion. I managed to get everything installed; however, during configuration I began to have doubts about using it. There were more setup options for Subversion than I wanted to mess with, and with every one I had to dig a little deeper to fully understand how the system worked so that I could decide how I wanted it to work. I had been hoping for something more plug-and-play, since I’m probably going to be the only person using it. Thus began the hunt for alternatives.

While poking around the ’Net, I had run across several different open-source projects focused on version control, but didn’t think much of them. Of the names I did see, none stood out like CVS and Subversion; after all, those were the only real version control systems, right?

Guess again! As I dug deeper I was surprised to find that there is a lot of activity going on in the world of version control. I found over ten different projects all involved in developing what you might call a “modern” version control system! Some of them are Bazaar-NG, DARCS, Monotone, Arch, Git/Cogito, Codeville, Mercurial, and SVK.

The main idea behind many of these new systems is to change the repository model from having a central repository and server to a distributed one where every “checkout” of the source code can be its own repository and server, thus imparting the name of distributed version control systems (DVCS). One of the interesting aspects of these new systems is that they give software developers more flexibility in how they can share code amongst themselves.

Of the projects which I found, I’ve narrowed my options to three: DARCS, Mercurial, and Bazaar-NG. I’ll tell you more about what I decide later, but I will leave you with some of the important pages that I read.

h3. Resources

Project sites
• “”:
• “”:
• “”:

Thoughts from others
• “”:
• “”: (don’t miss the comments)
• “”:

• “”:
• “”:
• “”:

A report card of AJAX/DHTML platform compatibility.

I’ve been checking out more of the AJAX/DHTML libraries which have been released lately. In the back of my head I have a nagging question about their cross-platform compatibility. I certainly don’t want to use anything which isn’t cross-platform, because that smacks of the browser wars from long, long ago.

Luckily I’ve found someone who has already worried about this and put the effort into finding out. The article also talks about the history of dynamic HTML on the Web and the battle between Microsoft and Netscape.

A big thanks to musings from mars for creating the AJAX/DHTML Report Card.

When did Linux become mainstream?

I read an “article on”: that talks about perceived changes in the site’s culture. It blames the widespread adoption of Linux as the culprit that shooed away the hard-core geeks. Of course, Linux adoption was a slow, gradual process brought about by a series of events in the computer world. The article gives several examples, including IBM embracing Linux at the LinuxWorld Conference & Expo in 2001 and a congressman from Venezuela telling Microsoft to drag its underhanded business practices elsewhere because he was going to adopt Linux for use in his country.

I personally think the shift started back during the United States v. Microsoft case. I can’t remember exactly, but I think it was Ballmer who stated, “We’re not a monopoly, because there’s that Linux thing.” That statement unleashed the curiosity of the American media, gave Linux plenty of free press on national television, and launched it into the mainstream.

Why? Linux had been around for almost a decade, existing only as source code passed around the Internet among the most hard-core of geeks. But now Aunt Helen, in a little town somewhere in the Corn Belt of America, could find out about Linux by watching Fox News before she was even able to get broadband.

This is one of the few times I will say, “Thank you, Microsoft”.

Setup MarsEdit with Quills 0.9 Final

I have been wanting a better way to write blog posts than using the Plone interface, so I was happy to find that MarsEdit and Quills both implement several blog APIs. Though it appears that the implementations of both the Blogger API and the MetaWeblog API are incomplete, MarsEdit can still be used to post articles using the Blogger API.

Here are examples of necessary configuration:
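
The configuration examples themselves were lost in formatting; roughly, the MarsEdit settings look like this (the field names are from memory and may differ slightly; the site URL and account are placeholders – use your own):

```
System API:   Blogger API
API Endpoint: http://www.example.com/mysite   (your Plone site URL)
Blog ID:      (the UID found via the steps described in this post)
Username:     your Plone user name
```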

The tricky part is obtaining the Blog ID. Plone uses an immutable identifier for all objects so that they can be found even after renaming them. Quills in turn uses this ID to get at your weblog object.

In order to get this ID, you’ll need to log in to your Zope Management Interface (ZMI) and go digging through the uid_catalog in Plone. Once you have clicked on the catalog object, click on the tab labeled Catalog. Find your blog object in the list of results and click on its link.

On the resulting popup page, search for the key named UID. The value of this key is what you will use for the Blog ID.

Note: Because of errors in the implementation of the MetaWeblog API, you will see a Zope error after posting; however, the post will be saved correctly.