Thursday, December 20, 2007

Decrementing vs Incrementing

A while back, I read some things about how decrementing loops in javascript are faster than incrementing loops, especially in IE (see http://www.moddular.org/log/javascript-loops). The main point is that it is faster to compare against 0 than against any other number. As a result, I had implemented some decrementing loops in my own code (loop unrolling seemed too heavy-handed) but hadn't been too compelled to make a habit of it. Today I decided to investigate this again, since I had some lingering doubts about whether any of this made a difference at all... So, here it is.

My test code consists of the following:

var limit = 1000000;

function testi(){
    var n = 0;
    for (var i = 0; i < limit; i++)
        n = i;
}

function testd(){
    var n = 0;
    for (var i = limit; i > 0; i--)
        n = i;
}

time1 = new Date().valueOf();
testi();
time2 = new Date().valueOf();
document.write("time incrementing (" + limit + " times) = " + (time2 - time1) + "<br>");

time1 = new Date().valueOf();
testd();
time2 = new Date().valueOf();
document.write("time decrementing (" + limit + " times) = " + (time2 - time1) + "<br>");


Fairly straightforward. I can change the limit value and check what the differences are for different loop sizes. I compared Firefox 2 and IE 7, and I found the following:

  1. At a million loops in FF and IE, decrementing is usually twice as fast as incrementing (~220ms vs ~440ms). But how important is a ~220ms baseline difference when you are running a loop a million times? Presumably, whatever you are doing inside that loop will most likely eclipse the difference.*
  2. If we decrease the loop to 100,000, the difference in IE becomes ~20ms decrementing vs ~40ms incrementing. And Firefox, surprisingly, shows the same results.
  3. At 10,000 loops, Firefox wavers between 0 and 10 ms for either incrementing or decrementing. As does IE.
  4. At 1,000 loops, we are in 0ms land for both IE and FF.
So, we see that at a million iterations, decrementing saves us some time -- especially in IE. When we drop down to 100,000, the differences between the browsers disappear, and while decrementing still seems to be twice as fast, the ~20 ms difference is tiny in the grand scheme of things. At 10,000, the differences between incrementing and decrementing in both browsers disappear entirely.

One problem with decrementing loops I have found is that they don't behave the same as incrementing loops. If you are looping through an array, you end up going through it backwards. This is sometimes a problem: if, for example, you are executing a series of functions stored in an array, you may want to ensure that the first one in is executed first. A simple way to fix this is to find the index by subtracting your decrementing index from your limit. But does putting this simple bit of math into the loop corrode any performance gains we made by decrementing in the first place? To find out, I substituted the n = i; assignment in the decrementing loop function with an n = limit - i; assignment (a sketch of this compensating pattern follows the results below). I found that:

  1. In Firefox (at 1 million loops), the cost of decrementing rose to ~550ms vs ~440ms for incrementing, erasing any gains and adding cost. In IE, the time for decrementing also increased beyond the time for incrementing.
  2. Below 1 million loops, decrementing lost all of its advantage (at best being equal) when the compensating math was implemented.
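
For reference, here is a rough sketch of that compensating pattern: a decrementing loop that still executes an array of functions in their original, first-in-first-run order. The functions in the queue are made up purely for illustration.

var queue = [
    function(){ document.write("first<br>"); },
    function(){ document.write("second<br>"); },
    function(){ document.write("third<br>"); }
];
var limit = queue.length;
for (var i = limit; i > 0; i--) {
    // limit - i walks forwards: 0, 1, 2, ...
    queue[limit - i]();
}
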
In conclusion, it seems that decrementing loops are more efficient, but only when the loops are extremely large. That advantage pales next to whatever else you might be doing inside an extremely large loop. Above all, decrementing is counter-intuitive and requires compensating math when used with loops that are (or might be) order dependent, and that compensation nullifies any gains decrementing gives in the first place. As with many optimization tricks, it seems that the cure is worse than the symptom.

*Strangely, in Internet Explorer only, if I reverse the order of the loops and execute the decrementing function first (at a million loops), decrementing takes ~210ms and incrementing takes anywhere from 900 to over 2000 ms. This is a big difference, and I haven't been able to figure out the source of the disparity.

Wednesday, December 19, 2007

Always Eat Alone (sometimes)

A friend of mine recently read the book Never Eat Alone, and since then he's been singing the praises of networking. Yes, networking is important.

I have always seen the value in networking, but I get a little cranky when networking gets all of the attention at the expense of well... being alone. Being alone is really important.

I recall a networking buff extolling the virtues of practicing small talk at every possible opportunity, like when you're in a cab. When I'm alone in a cab with a non-talkative driver, I see it as an opportunity to think and reflect. Maybe I am just a misanthrope, but I actually like, and require, time alone with my thoughts.

Since I began working in France, one of the customs that has stuck out is lunch. In the office I work in, everybody takes about a full hour for lunch, and almost always in a group. In my office in New York, it is normal to eat alone at your computer in about 10 minutes; eating socially is the exception for most.

Part of the difference is due to the fact that in France, lunch is generally the main meal of the day -- so having a sandwich for lunch would be like having a sandwich for dinner in the US. Another reason is that in France, eating is considered a social activity and not just an unfortunate biological function. I have to say that, in general, I am much more partial to the French attitude than the American.

However, I still do miss eating alone sometimes.

Anybody who does creative work (which I count software development as) must find a balance between being alone and being social. We need time alone to think and to create, and we need time with people to communicate ideas, share our work, and get a break from being alone. Going between the two requires a shift in gears that can take time. So, taking an hour break for a social lunch can easily add up to an hour for lunch plus an hour afterwards getting back into the mindset you were in before lunch.

So, when you eat alone, you are free to work on the problem you failed to solve before lunch, to catch up on your reading, to think about something entirely different, and to preserve your mindset. Consider this the next time you feel a pang of shame as you hunker down in front of your computer with a sandwich. Don't let the networking people guilt you into going out for lunch every day. You have nothing to be ashamed of (as long as you keep the mustard off of the keyboard).

Questions for Review

As the end of the year draws near, it is review time at my company again. Here are some questions that I wish appeared on the review (oh well, they are good for my own personal reflection anyway):
  1. What risks have you taken this year?
  2. How much code have you eliminated?
  3. What tasks have you eliminated?
  4. How many people have you pissed off and why?
  5. How many projects did you start up that failed? (If you have a lot of failed start-ups, that's a good thing.)
  6. Who did you meet this year?
  7. How many design documents did you write?
  8. What new hobbies or interests do you have that have nothing to do with your job?

Monday, December 17, 2007

The Led Zeppelin Principles of Innovation

The recent Zeppelin reunion and a book on innovation I have been reading (Weird Ideas That Work by Robert I. Sutton) have given me the opportunity to reflect on the particularly innovative qualities of the band.

Whether you think Zeppelin rocked or sucked, there is no denying the innovative success of the band. No, they didn't innovate the way jazz musicians innovate, but they were probably the most disruptive force to hit popular music in the roughly 25 years between the age of Elvis and the era of The Sugar Hill Gang and The Sex Pistols. Led Zeppelin was not only hugely commercially successful, but they created an entirely new category of music that to this day clearly bears their mark. How did they do it? Here are some lessons I think we can learn from Zeppelin about innovation:

  1. Be proud of other people’s rejection.

The story goes that the name “Led Zeppelin” came from an insult thrown at Jimmy Page: that his new band would go over like a “lead balloon”. Who names their band “Led Zeppelin” (in 1968)? Who names a company “Google” or “Yahoo!”? If people think your idea is going to fail, great. That’s less competition for you. If you are doing something innovative, don’t call it something traditional to try to make it palatable to the mainstream; put your differences front and center.

  2. If you are doing something that seems ridiculous and makes you uncomfortable, then you might be doing something right.

Jimmy Page actually wanted Rod Stewart as the singer in his new band, but Jeff Beck got him instead. The legend is that Robert Plant’s over-the-top vocal and stage persona bothered Page so much at the beginning that he was ready to fire him after their first tour. What 26-year-old seasoned musician wants to be on stage with an 18-year-old preening hippie doing a bizarre James Brown impersonation? In the end though, it is the un-self-conscious audacity of Zeppelin that made it so different from the myriad of other British bands imitating black American blues.

  3. Forget about tradition and following the proper path.

Led Zeppelin basically hacked traditional blues songs and called them their own. While this practice has many ethical issues, there is a lot to be admired in their lack of timidity in appropriating what they liked, and they weren’t overly concerned with following the established traditions of appropriation (i.e. performing traditionals as “traditionals”).

Jimmy Page, Jeff Beck, and Eric Clapton all came out of the Yardbirds, a British blues/garage rock band. Beck and Clapton continued on the trajectories set in the Yardbirds, refining traditional blues motifs and conservatively synthesizing them with rock. Page violated those traditions by stealing riffs and putting them in non-traditional contexts. For traditionalists, the original context is sacrosanct. If you are going to play the blues, you have to play it like a bluesman from Mississippi, not like a Viking. So, while Eric Clapton aspires to play “Crossroads” in as close an homage to Robert Johnson as possible (with some incremental changes), Jimmy Page took a Muddy Waters riff and turned it into “Whole Lotta Love”, the first Heavy Metal anthem of all time.

*Disclaimer: everything above on Led Zeppelin was taken from memory, from my many readings of Hammer of the Gods during the 8th grade. All of you devotees out there, please excuse any minor factual inaccuracies.

Monday, December 3, 2007

Display is Data

Traditionally, the judgment has been that HTML is entirely about display, is generally throwaway, and at best is for graphic designers to lovingly handcraft into skillful and brittle, baroque shapes.

Or to put it more bluntly: “HTML is the crap the server has to spit out and we don’t care what it is, just as long as it makes the browser do what the spec says.”

But what about a world where HTML is data returned from web services, display is the domain of CSS only, and javascript runs on the margins of the page and is not embedded into the HTML?

For example:

Say you need to build a page that is going to display a table of data. Let's say it's a table of charges on an account that includes name, amount, and date. In standard web development practice, this page would be given to a graphic designer and then to a developer, who would plug data into the designer's display, and you might end up with something like this:

<table class="dataTable">
    <tr>
        <td class="tableHeaderLeftSelected">Name</td>
        <td class="tableHeader">Amount</td>
        <td class="tableHeaderRight">Date</td>
    </tr>
    <tr>
        <td class="tableDataLeft"><span class="bold">Bob Smith</span></td>
        <td class="tableData">57.50</td>
        <td class="tableDataRight"><span class="highlight">December 1st, 2007</span></td>
    </tr>
</table>

The above example was driven by the following requirements:

  1. the first column needs to be left aligned
  2. the last column needs to be right aligned
  3. all other columns need to be center aligned
  4. dates need to be highlighted
  5. names need to be bold

What’s so bad about this? Nothing is terrible, and in fact, this is a big improvement over a lot of markup I have seen (no inline styles and relatively clear class names), but there are a number of ways this could be improved:

  1. use the HTML structure as data (hence the “Cascading” in CSS)
  2. break up the classes into semantic tags
  3. tag data, not styles

Here is the transformed HTML:

<table class="data accounts">
    <tr>
        <td class="header first selected">Name</td>
        <td class="header">Amount</td>
        <td class="header last">Date</td>
    </tr>
    <tr>
        <td class="data first name">Bob Smith</td>
        <td class="data currency dollars">57.50</td>
        <td class="data last date">December 1st, 2007</td>
    </tr>
</table>
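
To give a sense of the payoff, here is a rough sketch of the CSS this markup makes possible. The selectors follow the class names above, but the actual style values are just placeholders covering the five requirements:

.accounts td       { text-align: center; }        /* default: all other columns centered */
.accounts td.first { text-align: left; }          /* first column left aligned */
.accounts td.last  { text-align: right; }         /* last column right aligned */
.accounts td.date  { background-color: #ffff99; } /* dates highlighted */
.accounts td.name  { font-weight: bold; }         /* names bold */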

This not only lets you build cleaner CSS, but also allows you to load the markup with latent capabilities (for example, let's say months down the road you want to make all 'dollar' data green; you just need to make one change in the CSS), and provides clean hooks for plugging functionality into this HTML using javascript without actually putting any code inline. This has a lot of advantages in browser mashups where the content of a page is built on the fly and we want to defer the definition of most behaviors to the controlling page. For example, if the above HTML were returned as the result of an AJAX request, a querying utility such as dojo.query could be used to pick out the headers and attach event handlers to control sorting in a way appropriate to the particular page:

var headersArray = dojo.query(".header");

for (var i = 0; i < headersArray.length; i++) {
    // attach event handler to each header
}
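
As a rough sketch of what filling in that loop might look like (assuming Dojo 1.0's dojo.connect and NodeList.forEach; sortByColumn is a hypothetical function that the controlling page would define):

dojo.query(".header").forEach(function(headerNode, idx){
    // sortByColumn is a hypothetical sort routine supplied by the host page;
    // idx identifies which column header was clicked
    dojo.connect(headerNode, "onclick", function(){
        sortByColumn(idx);
    });
});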

Of course, HTML is not an alternative to XML or other pure data formats, but it can provide a useful middle ground in many content syndication scenarios. And overall, data is a good paradigm to start with when implementing HTML. The W3C sums this up well in the Working Draft for HTML 5:

HTML should allow separation of content and presentation. For this reason, markup that expresses structure is usually preferred to purely presentational markup. However, structural markup is a means to an end such as media independence. Profound and detailed semantic encoding is not necessary if the end can be reached otherwise. Defining reasonable default presentation for different media may be sufficient. HTML strikes a balance between semantic expressiveness and practical usefulness

Wednesday, November 14, 2007

smells like teen spirit

I googled "kolba mashup" to see if my blog would come up.

It didn't.

But I got this gem as a consolation prize. Wow. We didn't have pep rallies like this in my high school.


http://www.tubarati.com/video/uvYl7-aR3Kw/Mini-playback-show-in-Kolba-City-CZARNO-CZARNI--NOGI-:PP

I think my next vacation has to be to "kolba city".

Monday, November 12, 2007

end of mashup camp

Today was the last day of Mashup Camp in Dublin. Here is a quick summary of days 2 & 3 (from my perspective):

Day 2
The first half of the day was given to more presentations as part of "Mashup University". Although many of the presentations are interesting (and some are very interesting), they start to drag a bit after a full day of presentations yesterday.

Mashup, SOA or RIA?
Martha Rotter from Microsoft kicked off the day with another go at some of her material that had been a casualty of the wireless network the previous day. We saw some more demos of mashups using Silverlight with more of an emphasis on social networking functionality. Martha also touched on codeplex.com: "Microsoft's open source project hosting web site".

As usual, everything coming out of Silverlight looked very slick and pretty, yet I found myself often wondering "is this really a mashup?". This would become a recurring theme for the rest of the conference.

AOL & LOL
Next, there were two presentations on APIs from AOL. The first was a very funny one, by John Herren, on AOL's XDrive storage API. (John isn't with AOL, he was just filling in.) The presentation is available here.

After John, Stephen Benedict (who really is from AOL) talked about Open AIM. The goals are to allow users to build their own custom AIM clients, as well as to take bits of functionality from the AOL messenger and plug them into their web apps. Developers can also write their own bots and plugins for the client and contribute them. This is cool stuff, and Stephen made the very good point that it allows start-up sites to automatically tap into the AOL community.

Also, Stephen demoed what AOL is calling WIMZI, which is an embeddable IM window that the owner of a site can put on a page using a small amount of HTML. The IM window lets end users of the site message the owner directly and anonymously, while also protecting the owner's identity. This is very similar to what Meebo has done.

The Mashups are coming...
On the first day, I had noticed a number of large poster displays (like the kind you'd see at an actual conference) featuring close-up shots of people who are half modern primitive / half corporate suits. The most prominent image was of a tough looking guy with full body tattoos and big honking posts in his ears, wearing a button down shirt and tie (and cuff links!). On the top of the poster it reads: "The Mashups are coming. Unleash your inner developer." The whole display seemed incongruous with the spirit of Mashup Camp, and when Serena did their presentation it became clear that the guy in the suit had gone and gotten the tats to mix with the young crowd. Serena is a 20-something-year-old mainframe company that has taken their workflow application and hooked it into SOA on the backend and HTML on the frontend. Not that there is anything wrong with any of this. I proudly work for a company that goes all the way back to the 1800s and that is in the middle of reinventing itself right now. I just didn't see how this was a mashup. To quote Fight Club: "sticking feathers up your ass does not make you a chicken", and I would say that calling Salesforce.com, while very nice, does not make you a mashup.

Kegerator
The last talk (that I can remember, at least) was by Chad Dickerson, who is the director of the Yahoo Developer Network. He of course talked about the developer network and the great work they have been doing: not just APIs, Pipes, and YUI (as if that weren't enough), but also design pattern documentation, performance tuning recommendations, and the YSlow plugin. If you are a web developer and haven't looked at Yahoo's Design Patterns and rules for Exceptional Performance, I strongly recommend you do. However, the biggest highlight of Chad's talk was his revelation that he is best known on the internet as a kegerator construction expert.

Geeking Out, Pubbing Out, and Install Hell
In the afternoon, we broke into discussion groups. The discussions and groups were participant-led and participant-moderated, and are one of the hallmarks of the unconference. Unfortunately, at this point, I became caught in an installation/configuration hell of a very deep level as the IBM team labored ceaselessly to determine what was wrong with my QEDWiki setup. About 36 hours later, with the tremendous help of Will, Flavio, and Dan (thanks again!), I now have QEDWiki running on my local machine -- just a little too late to enter the Mashup challenge.

The discussion I did get to participate in a bit was primarily around widget and service discoverability. The general consensus was that standards should be adopted for metadata around widgets and services. While you can't really argue against this being a good idea, I do wonder what can really be implemented that will be practical, work for a wide enough range of situations, and that people will actually use.

I am also not so sure that there is that much of a problem with discoverability to begin with. Or at least, not a problem that won't be fixed with de facto standards coming out of particular business needs. Also, discovery is where the real value is in mashups, and where some of the key differentiation happens. If discovery were easy, we would have to come up with something else just as hard in order to compete. In other words, easily discovered data has little or no value. So, it's nice to play around with, but nobody's going to build a business off of it.

The evening last night ended up in a couple of different pubs, in what was probably a relatively calm Sunday night out for many Dubliners. This morning, things got started a little late...

The Competition
The Competition was definitely one of the most exciting parts of Mashup Camp. After people had put together their mashups, speed geeking began: each mashup creator had 5 minutes to present their mashup to each observer. There was some interesting work done with QEDWiki (there was a QEDWiki mashup challenge as well as the Mashup Camp best mashup award being given - I helped judge the QEDWiki challenge), and there were a number of mashups done using mobile phones. The amount of mashup in the mashups varied. One (3rd prize) very entertainingly made a game out of Yahoo news headlines by substituting the terms in the headline with results from a Yahoo image keyword search (for example, "Bush Fires Staff" might show a picture of a shrub, a fire, and a big wooden stick; you then try to guess what the actual headline is, then click through to see the real headline and article). Many others I would consider to be primarily standard web applications calling free services on the web, rather than mashups built from scratch. All in all, it was really impressive to see what people had done with very little time and (in some cases) very much beer.

Leaving Camp...
There was definitely a "leaving camp" feeling to leaving mashup camp that distinguishes it from other conferences. It's a nice feeling, and one that makes me reflect on how we can build a stronger development community where I work. I hope I can take some of these lessons back to Reuters.

Finally, as the mashup ecosystem develops, I think it's important that mashup doesn't become just another check mark to put on software, and that the term doesn't become diluted to mean just calling a 3rd party service on the web. In my opinion, mashup implies some novelty of use, where two (or more) incongruous objects are brought together in a way that creates a third. Or, it could be said this way: Mashups should be easy to build, but hard to conceive.

Saturday, November 10, 2007

mashup camp Dublin: Day 1

Winding down from my first day of mashup camp with some takeout Chinese stir-fry and a cup of tea. Here are my reflections on the day:

After an unsolicited and random tour of Dublin by my cab driver (Bram Stoker's house, the school where the guy who wrote the screenplay for The Commitments taught, etc.), I arrived at mashup camp, held in the Guinness Storehouse, fresh from an early morning flight out of London. Amazingly enough, mashup camp is held in the very place they brew Guinness beer. This will be a very interesting few days.

The first day and a half of camp is dedicated to what is called "Mashup University". These are presentations by various contributors to the mashup ecosystem. Today they included several presentations from IBM on QEDWiki and Mashup Hub (which together make up their Mashup Starter Kit), Microsoft on their Popfly site showcasing mashups with Silverlight, John Musser from Programmable Web, and presentations from Kapow, AOL, and others (sorry if I missed anybody, I was a little late).

QEDWiki from IBM
I have had some experience with QEDWiki already. I saw a demo of it at AjaxWorld in Santa Clara last year and later had the opportunity to work with Dan Gisolfi, who presented today, on a proof of concept with it for a project at Reuters. QED has come a long way in the past year, and I think that packaging it with MashupHub was a good idea. MashupHub is a feed aggregator much along the lines of Yahoo Pipes (even with a similar slick GUI for linking feeds together). One major difference that is beneficial to potential enterprise users is that MashupHub can run on a private server, unlike Pipes, which is all in the public domain. The combined kit from IBM attempts to enable the assembly of mashups by non-technical domain experts who would work with widgets and feeds already built by the mashup enabler. This is an ambitious and important goal, since so many situational applications are never built for the simple reason that the domain expert who conceptualized the app is not able to create it directly, and engaging IT to build something from scratch is too expensive and/or too slow. QED covers a lot of ground, but we are not out of the woods yet. While it is true that you can build an application in QED without doing any coding, the assembly itself requires quite a bit of technical expertise, especially when it comes to resolving aggregation and filtering of disparate data feeds. These are problems that no one has the solution for yet, but IBM has done a lot of heavy lifting with QED and MashupHub.

Dan touched on many of the partnerships IBM is forming around the project: with Dapper, StrikeIron, SMILE and OpenSearch. He also summarized the Enterprise Mashup Challenge that IBM is hosting here for the best Enterprise Mashup built on QED.

One of the big problems with QED is aesthetics. Its clunky look detracts from its real utility, and risks undermining the confidence non-technical people will have in using it to build mashup applications. It’s a shame because if it just got polished, it would really shine. Ironically, the mashup hub looks far more finished, and this is the backend tool that feeds QED.


Hi Ho Silverlight and away

An interesting contrast to IBM's demo was the demo by Microsoft of the Popfly community for Silverlight mashups. Here it was all style with a shortage of substance. The widgets in Silverlight looked awesome, zooming around the page in 3D, and Martha (Rotter?), who was doing the demo, was able to easily link the widgets to feeds, visualizing data in different formats effortlessly (barring network problems). However, most of the lush graphics have no meaning for me and in fact are a negative. What use could I have for the whack-a-mole visualization of data when I am trying to solve the workflow problems of high-volume financial traders? Microsoft seems to be preoccupied with out-flashing Flash here, in a bid which I don't think plays to Microsoft's strengths.


social networking

John Musser delivered what was an excellent summary of the evolution of web mashups he has been tracking with Programmable Web over the past 2 1/2 years. He focused largely (for good reason) on the rise of Social Networking platforms and discussed Google's OpenSocial project. John also gave some entertaining examples of mashups. My favorite was a mashup that actually used the Amazon.com wish list service to identify political subversives and locate them on a Google map. Oddly enough, they are mostly on the east and west coasts. Check it out here.


Gregory Cypes' summary of AOL's AIM API project tied into John's presentation very well. Gregory also talked about AOL's use of the OpenID standard.


The day was wrapped up perfectly with the freshest pint of Guinness possible from the bar at the top of the storehouse. Tomorrow should be even more exciting as the gloves come off and the geekiness is unleashed.

Thursday, November 8, 2007

Enterprise Mashups?

It's the day before I go to Mashup Camp in Dublin. One of the focal points of this "unconference" will be "the enterprise mashup", which has become kind of a holy grail of web 2.0.

For some time now, I have been trying to resolve for myself how collaboration, mashups, and other web 2.0 buzzologies can really be relevant to my end users who are time-poor, technologically conservative, and only concerned with the bottom line. On top of this, technology in my enterprise world is wrapped in a tight web of permissions, corporate IP, and government regulation.

However, I am still optimistic that many of the technologies and models from the web 2.0 world have a great deal of value for the enterprise world. One of the problems seems to be that many of the guiding principles in the web 2.0 consumer world are incompatible with the enterprise world and need to be transformed. Here are a few:

Consumer Web 2.0 principles
  1. Information should be free
  2. Users want to interact with the system and use it for expression
  3. Build first, monetize later. For example: build up a community of people and then generate revenue through ads.
Enterprise Web 2.0 principles
  1. Information needs to be monetized
  2. Users want to use the system to do their jobs. They don't want to put any more information into the system than their current workflow requires (and less would be even better).
  3. Everything must begin and end with a business case. Advertising is not a relevant business model for premium enterprise applications, which must demonstrably cut costs, generate more revenue, or both.

Monday, October 29, 2007

House without Halls (an Anti Pattern parable)


Imagine you are an architect building a house.

First, you go to one builder and you ask him (or her) to build you a kitchen; you describe in adequate detail the cabinets you want and what kind of sink, stove, etc.

Next, you go to another builder and ask for a living room. And then to another builder for a bathroom, and another builder for a bedroom, etc.

All of the builders go off into separate shops and build these rooms, then they bring them back to you. And you can just push them together into a house. Right?

You quickly discover that there is no way to go from one room to the next, since only one door was needed to build each room. You find that you can travel between the rooms by going through the windows, but this becomes confusing when you have guests, since there is no common and predictable way for people to flow through the house. You don't know which room to wait in for your guests to arrive, and if they have come into a different room than you, you have to wander around, going in and out of windows, until you find them. Not having a clear point of entry for the house has made any opening into the house as likely an entrance as the next one.

You described to each builder exactly what you wanted in each room, and how many windows you wanted, but you didn't specify how large each room should be, how it should be shaped, or how the windows should be placed. Each room is built from the inside out rather than from the outside in. So, without considered edges, each room is a haphazard polygon and you are forced to fit rooms together by complementary shape rather than by function. So, after you have prepared dinner in the kitchen, you have to carry it out the window, around the outside of a quarter of the house, and back into a window for the bedroom, where, by a bit of luck, there is an interior-facing window opening out onto the dining room (this is the shortcut; you'd have to walk clear around to the other side of the house to get to the exterior-facing window of the dining room).

"This isn't so bad," you tell yourself. "Once it becomes part of my daily routine, I am sure I can optimize."

http://en.wikipedia.org/wiki/Anti-pattern

Friday, October 26, 2007

what's in a name?

Well, I finally did it. I am finally going to make a blog post. After almost a decade of being a web developer, and years of procrastination (and actually starting blogs and writing posts that I fussed with indefinitely and never made public), I will finally press the "publish" button. I promise.

I won't even preview. I'm just going to publish.

really.

Part of the problem with the web is coming up with a name. Naming something on the web is kind of like naming a pet. With the added caveat that your name has to be unique.

Sorry, 'Fido' is already taken. Try again.

So, all of the obvious names get filled up fast, and the clever ones a bit faster, and you're left with having to come up with something really, really clever, or something like "nickkolbaswebdevelopmentblog" that just screams "I wasn't clever enough to come up with a good name for my blog".

So anyway, here I am debugging massive amounts of javascript and I decide on a name.

I don't know, maybe there's a connection.

OK, here's what nullcheck means to me:

  1. It sounds like 'soundcheck'
  2. It's one of those things that you have to do, and everybody is guilty of 'forgetting' to do at least once in a while.
  3. It could be a metaphor (you decide), which appeals to my philosophical tendencies.

Alright, I am going to post this. Stay tuned for:

  1. possibly some interesting and useful information on javascript, ajax, mashups and other things I am occupying myself with
  2. insight into my life as a web developer
  3. probably a lot of bulleted lists



OK. I lied, I did preview first. Hitting publish now...