Monday, November 03, 2008

Google vote map wins 3 to 1, but I'd have to drive 3 hours

Apparently "Pulaski" is a common Ukrainian name, and they like to put American Citizens Clubs on streets with that name.

I saw Scott Hanselman's tweet about Google's 2008 US Voter Info map.

For the heck of it, I tried my current address and 3 places I used to live, one of which was the town of Cohoes, outside of Albany, NY (the capital city, in what is considered upstate NY).

Google suggested that I drive 170 mi to the Ukrainian American Citizens Club on Pulaski Avenue in Staten Island. Interestingly, the place I *did* vote 4 years ago was the Ukrainian American Citizens Club on Pulaski Street in Cohoes, NY. They had the right place, (almost) the right street name, just the wrong city.

It's a good thing I moved, or I'd have to forfeit my vote as the absentee ballot probably wouldn't get there in time.

Wednesday, October 15, 2008

Tron guy is a PC

Who says Microsoft doesn't have a sense of humor? Not only are they copying the image of John Hodgman's buttoned-up "PC" from the Mac ads, but a recent marketing email (obviously a long-delayed rebuttal to those ads, accompanying the TV ad series) includes a thumbnail of none other than the Tron guy.

Thursday, October 02, 2008

Sometimes Google scares me

My parents just got a new dog.  They sent me a short note about it when it came home for the first time.  The email mentioned the following dog-related words in the context of the message; those that could be understood to have to do with dogs even outside the message context are in bold:
  • spayed
  • standard poodle
  • lab
  • brush
  • paw (lots of animals have paws, and it's a verb)
My Gmail account showed some ads as it always does; they can be seen to the right.  This was one of the rare occasions that I actually looked at them.

Among the ads is one for "Australian Labradoodles".   My parents' new dog just happens to be a lab-poodle mix.  However, nowhere in the message is that mentioned, apart from the simultaneous occurrence of "lab" and "poodle".  Obviously though, any astute advertiser would pair those words for their labradoodle business.  Interestingly, none of the ads seem to deal directly with having animals fixed.

It just amazes me how accurate the Google ads can be given so little to go on.  I recently finished reading "The Google Story", which explains a bit about the general ideas behind Google's search-ranking methodologies but (understandably) doesn't say much about its ad-matching techniques apart from the word-bidding concepts.  (Very good book, by the way; I highly recommend it.)

I wonder how similar the results would be if I wrote an email saying:
After I brush my teeth I'm going to paw through a catalog then take the poodle to be spayed.

Wednesday, September 10, 2008

Identifying servers in a web farm with IIS headers

Once your application has been deployed to a web server farm, it can become tricky to track down problems, particularly when a problem occurs intermittently.  Sometimes these problems are intermittent precisely because they are occurring on only one machine of the farm.

Identifying the problem server can be rather challenging.  Often the first attempt is to modify your local DNS resolution (in Windows, the HOSTS file) to point the site URL at a single machine.  Depending on how your web farm is set up, you may not be able to do this because the individual machines may not be visible to you; only the farm's pool address is.  Furthermore, sometimes the problems we encounter do not manifest themselves when running in a single-server environment (otherwise we'd have caught them in development, right?).  To complicate matters further, often the only chance you have to identify which machine the problem occurred on is right when it occurs, as in, when you are staring at the application crash page.  Simply attempting to replicate the problem after you set up your tracking may not be enough.

A simple solution I have implemented on our staging and production web farms involves nothing more than the built-in HTTP headers supplied by IIS.  First, just add a custom HTTP header to each machine in the farm that contains the name of the machine, or any other unique value that you can map to the machine:
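As a sketch of the idea, on IIS 7 the same header can be declared in web.config (on IIS 6 it's added through the HTTP Headers tab of the site's properties dialog). The header name "X-Server-Name" and the value are placeholders; use whatever maps to your machines:

```xml
<!-- Hypothetical web.config fragment (IIS 7+). Each farm member gets
     its own value, e.g. the machine name, so responses are traceable. -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-Server-Name" value="WEB01" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```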

Then, when you browse to the site or are looking at an error message, you can open a tool like Fiddler or Firebug to view the HTTP headers of the response.

Particularly with a tool such as Firebug or another DOM inspector, you can get immediate information without having to start any kind of tracking tool or relaunch the site.
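To see the end-to-end idea without a farm handy, here is a small self-contained Python sketch: a toy HTTP server stamps a hypothetical X-Server-Name header on every response, and a client reads it back, just as Fiddler or Firebug would display it:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

SERVER_NAME = "WEB01"  # hypothetical farm-member name

class FarmHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # The custom header identifying which machine served this response.
        self.send_header("X-Server-Name", SERVER_NAME)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from the farm")

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FarmHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen("http://127.0.0.1:%d/" % server.server_port) as resp:
    header_value = resp.headers["X-Server-Name"]

server.shutdown()
print(header_value)  # -> WEB01
```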

Thursday, September 04, 2008

Correction: Chrome does cheque speling

I recently posted about Chrome and complained that it doesn't have spell check.  Apparently it does.  But it doesn't seem to work in Blogger.  Odd, considering Google owns Blogger.

Spell check seems to work in regular text boxes, but Blogger's "compose" view doesn't seem to use one.  I'm not sure what it uses, though.  I tried the Chrome page inspector but I can't see what it's doing.

Oh well, I suppose that's part of being beta.

Wednesday, September 03, 2008

Enriching .NET Windows Apps with the WebBrowser control

I am currently working on a desktop product upgrade project.  The old versions were developed on the .NET 1.1 platform.  They utilized a web browser control to display information for printing.  Unfortunately, support for hosting a web browser control in a Windows Form was poor, so the original development team had to create their own control with hooks into the Internet Explorer objects and such to do it.  Among other changes, we are doing a complete re-write of this application in .NET 2.0.  (We still need to support some older platforms, so 3.0 wasn't an option.)

One of the requirements of the new version dictated that we need to do some more intricate display of the information.  The prime display control candidate for this information is a "tree-grid" hybrid.  The standard .NET 2.0 toolbox simply doesn't have a control that can handle what we need (a shame really).  There are many third party controls that could do this, but that introduced a learning curve that our project timeline simply wouldn't support.  Obviously, there is additional cost involved with such a control as well.  As I evaluated what we needed to achieve, and being a web developer, I naturally looked towards HTML as a solution.  The desired output could be executed very simply with standard HTML constructs.

While analyzing the current application architecture and scoping out how we were going to integrate the changes, we decided to change the data storage strategy as well.  The application uses a local MS-Access database with a small set of tables to store standard relational data.  Given that we really don't need relational access to the ancillary tables of the data model, we determined that we could simplify the whole thing greatly by reducing the data architecture to a single table with a blob of XML to represent the bulk of the record's detail.

Now that we have serialization going on for database storage, the natural step was to use that XML for display.  One of the great additions in .NET 2.0 is the System.Windows.Forms.WebBrowser control class.  This provides simple, native hosting of a web browser inside a Windows Form.  To solve the complex display problem, we place a web browser control on the main user control, serialize the data model object instance to XML, apply an XSL transformation to it, then feed the result directly to the web browser control on the form.  The browser control has a "DocumentText" property to which you can write HTML directly.  Elegant, simple and surprisingly fast.  A natural side effect of this strategy is that it becomes trivial to change the view of the data: simply develop a different style sheet and provide a switching mechanism.
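The pipeline (serialize the model, transform the XML, hand the markup to the browser) can be sketched outside .NET too. Below is a minimal Python stand-in for the transform step, with a made-up record format; in the real application the transform is an XSL stylesheet and the result goes to the WebBrowser control's DocumentText:

```python
import xml.etree.ElementTree as ET

# Hypothetical serialized record, standing in for the app's data model XML.
record_xml = """<order id="42">
  <line sku="A-100" qty="2"/>
  <line sku="B-200" qty="1"/>
</order>"""

# A minimal stand-in for the XSL transform: turn the XML into the HTML
# that would be written to the browser control for display.
root = ET.fromstring(record_xml)
rows = "".join(
    f"<tr><td>{line.get('sku')}</td><td>{line.get('qty')}</td></tr>"
    for line in root.findall("line")
)
document_text = f"<html><body><table>{rows}</table></body></html>"
print(document_text)
```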

While generating XML, transforming it to HTML and displaying it on the form was now very easy with the browser control, the big question was: how do we interact with it?

In order for the browser to interact with the user control, the control must be exposed to COM by being decorated with the ComVisibleAttribute:

[ComVisible(true)]
public partial class OurUserControl : UserControl

This exposes the object to COM and thus allows the COM-based browser to see and interact with it.  Now we need to tell the browser what to interact with.  The browser control has the property "ObjectForScripting", to which you can assign any COM-visible object in your Windows application.  In our case, when the user control is created, we hand the browser the control instance itself:

public OurUserControl() {
    InitializeComponent();
    webBrowser1.ObjectForScripting = this;
}

This exposes the object to the web browser's window as "window.external".

In order for the browser context (i.e. the HTML DOM) to call the methods on the win form context scripting object (the windows user control), we need to make some methods visible.  This is simply a matter of making a public method on the object you have exposed to the browser:

//this is managed code in the win forms application
public void DoSomething(string message) {  //parameter list is illustrative
    //do stuff in the windows app here
}

The browser can now call that method through the document window's "external" object:

/* This is "client-side" javascript living
 in the HTML sent to the web browser */
function doSomething(){

Now you can call this javascript function as you would in normal HTML.

When it comes to argument types, there seems to be a certain amount of implicit conversion going on.  In my experiments, I found that a javascript variable holding a numeric value came into the managed method call as such.  So if you use "parseInt(...)" in javascript, you should expect the managed call to receive a proper System.Int32 as the argument value.  In most cases what I'm dealing with are strings, so the argument values slip right through without any fuss.  If the types don't match up, you'll get an exception about it for sure.

The web browser control also allows you to access the browser's DOM.  The Document property returns an instance of HtmlDocument.  From there you can get at individual HTML elements and manipulate the DOM as you see fit from managed code.  The browser control itself has many methods and properties to direct it as needed.

All in all, I'm finding the web browser to be a compelling tool for developing richer windows forms applications using my existing web knowledge and without the need for purchasing additional control libraries.  In a very short time, my team has been able to do some very good proof of concept work that is leading into rapid development of something that only a short time ago had me very concerned for the project timeline.  We did some initial tests with a basic prototype on various platforms to ensure we weren't getting into a snag.  The application ran happily on Windows XP, Windows 2000 and Windows Vista.  Those are our target platforms, so I'm pleased with the outcome.

The recent (yesterday) release of Google Chrome suggests that we might one day be able to integrate the open source WebKit rendering engine as an alternative to the embedded Internet Explorer browser control.  Aside from the obvious decoupling of our application from I.E., this could also mean that our application could potentially run on a non-windows .NET platform such as Mono.  However, we'll stick with I.E. and windows for now.  The enhancements we are making with the web browser rendering and .NET 2.0 upgrade are enough for this go around.

Google Chrome: Shiny, new and very cool

I just finished reading the 38-page comic-style technical overview book on Chrome.  They implemented some pretty interesting things.  Of definite interest is the process isolation used to separate the tab and plugin processes.  As a GMail user, I notice the performance effects on the other tabs when I'm running GMail.  Also, with the bloated Flash and PDF plugins, it seems that so many page loads just bog everything down.  It will be interesting to see what the performance is like with Chrome.

Other particularly interesting features are the "incognito" mode tab that does not log any history and the new-from-scratch javascript engine/virtual machine they are calling V8.

A couple things I've already noticed that I'm a little disappointed with: there doesn't seem to be an automatic spell check like Mozilla has, and the zoom feature only changes text size rather than doing a natural full-page zoom like Opera or the latest Mozilla.  Perhaps they are still working on that.

Despite being currently in beta, as are so many of Google's applications, it's pretty fair to say that it's still better than many other applications.  Being open source, it will be fun to see all the projects that spin off from the various pieces that make up this new browser.

I'm definitely going to give Chrome a whirl and see how my web experience changes.  And of course, now I need to test all my web applications to see if they behave!

Saturday, August 30, 2008

MaxiVista as a Keyboard/Mouse switch

In order to connect to my workplace VPN, I am required to use my work laptop which has a custom VPN client. Unfortunately, they don't give us the VPN client so I'm unable to install it on another machine such as my home computer. I don't have the extra hardware or space at home to set up a full workstation for the laptop with my preferred keyboard and mouse. However, I have those set up on my home workstation and a copy of MaxiVista.

MaxiVista is a utility that allows you to use another physical PC connected to your LAN as an additional monitor. It's not capable of high speed graphics but quite suitable for development. When you move the mouse pointer across screen boundaries, you switch between your locally attached display and the network attached display. One of the features of MaxiVista is the ability to switch the remote screen between "additional monitor" mode and "remote control" mode. This basically allows you to use MaxiVista as an automatic K/M switch, using your primary PC (running the MaxiVista server) as the hardware host for the keyboard and mouse but work on the remote PC desktop.

So I fire up my laptop, connect to the work VPN, then use the remote control method of MaxiVista on my home workstation to use the better keyboard and mouse on my laptop. While working, I can instantly move over to my home workstation screens. This can be particularly helpful when I want to test web applications or .NET remoting scenarios I'm developing on the laptop. I can jump right to my home environment to test network behavior.

Wednesday, August 13, 2008

Serving Neuros OSD Media Files With IIS

I recently got a Neuros OSD working on my home network. It's hooked up to my DVR box. I DVR programs (mostly for my kid), then use the DVR's "Copy to VCR" feature to dump them to the Neuros and out to my NAS server. The primary purpose is to archive shows somewhere other than the DVR. Of course, the handy benefit of being able to watch them on a laptop or make DVDs from them helps too.

The first problem I encountered was that using the Neuros' default settings for the "TV format" recording left it in a mostly unusable state on my PC. The video is encoded as MP4 while the audio is AAC. It is likely that I just don't have all the right codecs on my Windows machines to play it. Despite that, I soon discovered that I could change the audio format of the recording. Switching it to MP3 did the trick.

I recently came across some segments of a science show that I wanted to share with some friends. So I recorded them to the NAS. The NAS share that contains all my media hangs off my server at home so I figured it was fairly trivial to just point my friends at the MP4 file. This didn't work. IIS reported a 404 error.

At first I figured it was a problem with URL encoding, because the directory and file names had some punctuation that I thought might mess it up. But after cleaning the names down to letters only, it still failed. I then remembered something a co-worker told me about IIS: it is set up with a list of MIME types, and this list determines what it will serve. MP4 isn't one of them. I had always assumed that IIS would just serve any file. Of course, in some cases file types are mapped to ISAPI filters or application extensions for more advanced handling, such as for PHP or ASP.NET. But it was a surprise to find that certain types simply wouldn't be served at all.

So... I added the MIME type "video/mp4" with the .mp4 extension and I can now at least download the file from the site. However, it seems that I can't watch it until it's fully downloaded. Not a major problem, but it would be nice to be able to start watching right away. Downloading a 10 minute video took about 3.5 minutes, so I should be able to watch it in a streaming fashion. I imagine this is just an issue with the MP4 format.
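As an aside, this extension-to-MIME-type mapping isn't unique to IIS. Python's standard mimetypes module carries the same kind of table, including the video/mp4 pairing added above (the file name is made up):

```python
import mimetypes

# Look up the MIME type IIS needed to be taught: .mp4 -> video/mp4.
mime, _encoding = mimetypes.guess_type("recording.mp4")
print(mime)  # -> video/mp4
```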

Wednesday, August 06, 2008

Firefox 3: Down in flames

I successfully installed Firefox 3 on WinXPPro-SP2 this evening. I already had FF2. Despite a clean install, I was unable to actually launch it: I got an immediate crash, and the crash reporter happily sent my crash report to Mozilla. After the 3rd time I gave up.

I uninstalled it and tried again. No change. So I downloaded and installed FF2 again, and I'm back up. I suspect now that the problem might lie with Firebug and its compatibility with FF3.

All I was really trying to do was reinstall with the DOM inspector so I could look at style sheet behavior. Fortunately, that is now working. I was also happy to see that all my settings, customizations and add-ons were seamlessly restored. Kudos to Mozilla (at least for pre version 3).

Friday, July 25, 2008

Clearing cached passwords in Windows

For quite some time I'd struggled with the occasional problem of cached passwords in Windows. You've probably had this happen before. You have some network resource such as a file server share that you need to access. When accessing it you are prompted for Windows credentials. So you put in a username and password, and because you don't want to go through that process every time, you select "Remember my password".

But then one day something doesn't work or you simply want to change how you are connecting to that resource.

I originally thought it might have something to do with the cached network connection itself. Going into a command prompt and typing "net use" will show you all the network resource connections you've made since the last login/boot. You can also use "net use" to disconnect them. However, if you've saved a password for the resource you still won't be prompted for one even on a new connection.

The question remains: Where exactly do those passwords go??

Go to Start -> Control Panel -> User Accounts

Select the "Manage Passwords" button. You'll get the "Stored User Names and Passwords" dialog:

This contains Microsoft Passport/Windows Live logins that are associated with the current logged in user profile as well as the saved passwords for network connections. You can add, remove or change as you like.

Wednesday, July 23, 2008

Reflection: Personal Uniqueless

Yes, the title says uniqueless, that being the opposite of uniqueness.

A friend of mine uses the following tagline on his forum posts:
"Everyone is unique, except for me."
When I was growing up, I was told, "You are special... there's no one else like you." That was a nice sentiment. Unfortunately, it's not quite so true. With a world population of nearly 6.7 billion people, the chances are pretty good that there is someone who's just like me out there, probably even in my own country. (Given cultural influences, it's probably even more likely.) They might even look like me! (Scary thought, and my sincerest apologies.)

Being ensconced firmly in the information society, it's easier than ever to find people who are like you. You come up with a great idea, google it, and find people who have already done it. Most of the good web site URLs are taken. There's probably someone, somewhere with your name, married to someone with your spouse's name, with the same set of kids. Statistics dictate that it's more than likely.

It's sad, really: growing up as an "individual", thinking your ideas are unique when they most likely aren't. Is this cynicism or just the acceptance of reality? Instead of thinking that my ideas are "special" and "one-of-a-kind", I now take them to the internet to see who else has had them and learn more. I get disappointed when I can't find anything, thinking to myself, "I must not be looking hard enough; someone must have thought of this already." Or I think that if I can't find anyone else who's thought of this and publicized it, it mustn't be that good an idea. It is provoking to think about how information availability can both provide and discourage.

We may look, act and behave differently, but like the Earth, we are all made up mostly of water and ultimately are just 1 single drop in the very large ocean of the human race.

Monday, July 21, 2008

Merits of a System.Collections.Hashtable

I was recently working on a very simple data analysis process involving two lists of items. One is a list of files on disk, the other a list of files in a database table. The work sets were on the order of 85,000 items each. I needed to simply scan each list to see if each item existed in the other. I loaded the two lists into generic string collections (List&lt;string&gt;) and iterated through each one once, looking in the other for the existence of the item (otherList.Contains(item)). I ran the process and it took a good 2 to 3 minutes to complete. Reasonable, I thought, as it was many tens of thousands of items.

I then thought about how the code was probably running internally. I imagined that the List.Contains method is probably just doing its own internal loop, comparing the item against each one in the list until it finds a match; otherwise, it runs through to the end. I remembered that the Hashtable organizes its items by the hash of the key, so it might have a better, more efficient way of finding items based on that. I refactored the code to use a Hashtable instead of the List. In addition to finding that I had some duplicates in the list coming from the database (fixed by adding a "DISTINCT" to the query), I found that the process now took about 5 seconds! That's an insane improvement. I'm curious how the Hashtable does its thing.
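For anyone curious, the contrast can be sketched in a few lines of Python, where set and dict are hash-based and play the role of .NET's Hashtable (the file names below are made up):

```python
# List.Contains walks the list item by item: O(n) per lookup, so the
# scan of one 85,000-item list against another is O(n^2) overall.
# A hash-based lookup jumps straight to the bucket for the key's hash:
# O(1) on average, collapsing the job to 85,000 constant-time probes.
disk_files = [f"file_{i:05}.dat" for i in range(85_000)]
db_files = [f"file_{i:05}.dat" for i in range(10, 85_010)]

# Slow version (commented out; it takes minutes, like List.Contains):
# missing = [f for f in disk_files if f not in db_files]

# Fast version: build the hash table once, then probe it.
db_set = set(db_files)
missing = [f for f in disk_files if f not in db_set]
print(len(missing))  # -> 10 disk files not present in the database list
```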

Tuesday, July 15, 2008

Neuros OSD + WRT160N + DD-WRT = Media center happiness

For a while I have had my eyes on the Neuros OSD: "The Open, Embedded Media Center". Not long ago, I got one. I hooked it up to my digital cable DVR and started playing with it. Unfortunately, I don't have network cabling down to my living room where the entertainment center lives, so I was limited to using flash/thumb drives plugged into the front of the OSD. Having only a couple thumb drives of a few gigabytes each, this greatly limited the amount of recording I could do: basically one 30-minute show (at the highest video size and quality) per drive. While experimenting, I relied on sneakernet to move the recorded shows to my 0.5-terabyte network attached storage.

Whether or not to network the OSD wasn't a question; it would be networked. However, I pondered for a while how I would physically hook it up. Given the limitations in running a physical cable, I was left with setting it up via Wi-Fi. Understandably, the OSD does not come with Wi-Fi, only a standard 100 Mb RJ45 jack for direct connection to an Ethernet network.

So I was off to my local Best Buy to see what they had for wireless devices. I found a standard Linksys wireless access point whose box hinted at the capability of being a wireless bridge. 60 bucks for a single wire to wireless bridge. Meh. For 100 bucks they had a wireless game bridge. Huh?? How is that any different? For 70 bucks I could get a whole standard router/gateway/access point. Despite any marketing-speak suggestion that I might be able to use a standard router as a bridge, I picked the Linksys WRT160N as the hardware for this experiment knowing where I was headed next...

The factory installed firmware for the WRT160N is not much different than previous versions I've seen in the stock Linksys routers (both wired [BEFSR41] and wifi [WRT54G] versions). The options are acceptable but not terribly flexible and there doesn't seem to be a way to configure the router as a wifi bridge. So in comes DD-WRT, an open source wireless router firmware alternative. I followed the specific instructions for the firmware version for my new router, flashing it with the new DD-WRT version (v24). I set up the wireless settings to match my network configuration (enabled WEP, entered keys), added the new router's WLAN MAC address to the MAC address restriction list on my existing access point/gateway, and gave the router the right static IP information to put it on the same subnet as the rest of my gear. After a reboot, I was able to traverse the wireless bridge connection and see network resources on the other side!

During much of the setup I had a laptop hooked up with a static IP. The new router has DHCP disabled, so I needed to be sure I could connect. To avoid false positives in my tests, I disabled the laptop's wireless so that everything went over the wire to the bridge router. Once it appeared that the bridge was functioning properly, I reverted the laptop's network adapter to standard DHCP mode to see if I could pick up an address from the gateway/AP/DHCP server. Sure enough, it worked!

At this point I was very delighted that all seemed to be working; however, I remained cautiously optimistic, as I still needed to get the OSD wired to it and talking. (There was a very small glint of skepticism because I bought the OSD off a friend who had had problems getting it connected to his network.) I moved the router into its place in the entertainment cabinet and hooked it up to the OSD. After power cycling both, I went to the OSD networking settings screen and instructed it to look for a wireless bridge. Apparently this is not what I needed, as it failed; I suspect that option is for a specific wireless connector device. So instead, I instructed it to configure automatically using DHCP. Again, despite suspicions that I'd be plagued with problems, it worked!

After making the decision to try the wireless bridge approach, I was greatly concerned about the amount of network throughput the OSD would need to reliably stream a live recording to a network drive. I started testing it by playing back some recordings I'd already moved to the NAS box. These played without difficulty. Then I set up a recording and let it run. This didn't seem to have any issues either. (I didn't watch the entire recording so I can't really say for sure that it completed without a glitch.)

Later on I discovered the Bandwidth Monitoring tab under the Status tab of the DD-WRT firmware admin site. This provides a live graph of the WAN, LAN and WLAN network usage in both bits/sec and bytes/sec. While recording at the highest settings for resolution, quality and audio sampling, the OSD appears to push out only about 400 kilobytes per second. My existing wireless access point is a WRT54G set to wireless-G mode, so the new router is limited to that; however, by the looks of it there is still ample bandwidth and throughput available to service the OSD reliably. I am pleasantly surprised at how well it is all working. (Of course, something will break tomorrow for sure, just to prove me wrong.)

Friday, June 27, 2008

ASP.NET tactics: Saving files to network shares

Getting a web app to talk to the local file system is hard enough. Getting it to talk to (and especially write files to) a networked system is even more difficult.

The initial problem is that the security context the site runs under is a local one. To complicate things, the security context changes as you progress through the life of an ASP.NET web request. The IIS service typically runs as "Network Service". However, you can set a particular site or virtual directory to run under another user. Furthermore, you can configure ASP.NET to impersonate yet another user. So it can be very difficult to figure out exactly "who" is performing a given action against the file system.

As I mentioned, the user contexts for these processes are local ones. In order to successfully access a network share, you'll need to run your site (and possibly its processes) under domain accounts so that the security contexts on both machines will cooperate. Of course, this also requires both machines to be members of the same or trusted domains.

One solution may be to create a virtual directory in the site you are working in. That virtual directory will point to the network location you wish to save files to. When you set it up you can specify what user to connect as, and in this case you can specify an external user context, such as a local account from the other machine (MACHINENAME\USER). This would likely be a bit more secure than modifying the IIS sites or process identity. When you save files, you'll just save them to a "local" virtual path such as /mysite/myMappedVirtualDir.

Unfortunately, setting up things like this typically becomes trial-and-error. You have to just keep trying different things until it works. It is understandable that web site technologies adhere to the principle of least privilege, but it often gets in the way of simply getting things done.

Monday, June 23, 2008

Networking a Windows 98 virtual machine

Yes, sadly, I had to create a VirtualPC instance of Windows 98.

I'm getting into a project for an app that must support Windows 98, if you can believe that. Fortunately, in this day and age I can run 98 through virtualization, and it runs very fast on modern hardware.

I installed the OS without incident. However, TCP/IP networking just didn't seem to want to work. I didn't get any errors regarding hardware or drivers. The OS found a network adapter called "Intel 21140 based 10/100 Mbps Ethernet Controller" that it liked and had a driver for. I had everything set properly for the network I am on; it just wouldn't acquire an address.

I googled a bit and found a forum post that wasn't terribly useful but led me to looking at the Virtual PC settings screen for the virtual machine. Under the "Networking" setting, you can choose the number of adapters and what each adapter is. It seems the default is "Nortel IPSECSHM Adapter" which Windows 98 doesn't seem to like much.

Another adapter choice was "Intel(R) PRO/1000 PL Network Connection", which is the same name as the adapter on the physical machine (or at least the name that shows in the Virtual PC host OS, Windows XP in my case). So I switched to that and started the virtual machine. Presto chango! It started right up and the network adapter got an IP address from the DHCP server.

Friday, June 06, 2008

Productivity with key chords

You can assign keyboard shortcuts to Start menu items in Windows. Visual Studio allows you to map every known command to a keyboard shortcut. This can be a major help for those programs and commands that you use often. My philosophy is that the keyboard (and thus the keys) is always in the same place (very helpful when you have multiple monitors and a lot of ground to cover with a mouse cursor). Even when mousing, my left hand is always on the keyboard, so many of my most frequently used shortcuts are left-hand-only. It's amazing how easy it is and how much time it can save.

Below are my favorites. Each is preceded by an indicator of where the shortcut is defined ([W]indows or [V]isual [S]tudio). Some of the Visual Studio chords are more applicable to web application development.

W: Ctrl-Alt-S
Launch source control. (Probably only useful if you use an explorer-type utility for your particular source control system, such as VSS or Vault.)

W: Ctrl-Alt-Shift-S
Launch SQL Server management studio.

W: Ctrl-Alt-D
Launch Windows command shell.

W: Ctrl-Alt-Shift-P
Launch Windows PowerShell.

W: Ctrl-Alt-F
Launch Firefox.

W: Ctrl-Alt-C
Launch Beyond Compare, my favorite comparison tool.

W: Ctrl-Alt-E
Launch Internet Explorer.

W: Ctrl-Alt-X
Launch Microsoft Excel.

W: Ctrl-Alt-M
Launch mail app (Outlook in my case).

W: Ctrl-Alt-V
Launch Visual Studio 2005.

W: Ctrl-Alt-Shift-V
Launch Visual Studio 2008.

W: Ctrl-Alt-O
Launch Opera.

W: Ctrl-Alt-P
Launch PasswordSafe (password management tool).

VS: Ctrl-Alt-Shift-A
Attach to processes. I use this to manually attach to running processes instead of always launching apps with "Run", which starts the browser and attaches to both the server and the browser. Usually I just run web apps in an active browser and attach as needed. Nine times out of ten the problems in the code are obvious and I don't need to actually step through it. Plus, when you have a web app that loads up session variables, requires logins, etc., it's far easier to just dive into debugging than to re-launch and go through all the steps to reproduce a problem.

VS: Ctrl-Alt-Shift-D
Detach from all processes.

VS: Ctrl-Alt-B (VS default shortcut)
Build solution

VS: F6
Build the active project. Very handy for when you are fixing simple compile time errors. If I have a project with many errors that I'm working on, I will quite often just rebuild with this instead of hunting for the collapsed error list as I work my way through the errors.

VS: Ctrl-Alt-Shift-B
Re-build solution. For me, this one does require two hands, but I use it less often than "Build solution" and my right hand is usually on the keyboard already.

VS: Ctrl-Alt-Shift-C
Show differences between the working file and the source-controlled copy. (This VS command only works when focus is on the Solution Explorer, but getting focus there is just a matter of the next shortcut.)

VS: Ctrl-Alt-L
Move focus to the Solution Explorer. You can move back and forth between the Solution Explorer and the active code editing window with this and ESC.

VS: Ctrl-Alt-Shift-H
Show file history in source control.

VS: Ctrl-Shift-F4
Close all documents.

The one catch with using shortcuts is that you have to avoid overlap. You can't set up a shortcut in a particular application that is the same as one used in Windows, as Windows will catch it first. That's why I have some commands modified with the Shift key, such as the process-detach shortcut that would otherwise conflict with launching a command shell.

Aside from these, I also use some of the standard built-in Windows shortcuts:
WinKey-E: Windows Explorer
WinKey-R: Start -> Run
WinKey-M: Minimize all windows
Ctrl-Shift-ESC: Task manager

Overall, once you start using keyboard shortcuts you'll find it far easier and faster to execute tasks. People often comment on how fast I am when working. It's not so much that I'm faster than them; I just use the apps and tools in a more efficient way. We all have enough work to do; there's no reason to make it harder by wasting time hunting for the programs and commands that we use all the time.

Friday, May 30, 2008

Web Site vs. Web Application Project

In Visual Studio 2005 and later, there are some significant differences between a "web site" and a "web application project" (WAP).

Project structure
A web site has no project file. The "site" is simply the collection of files in the site's directory. Project/binary references and other configuration settings are stored in the web.config file (poor form in my opinion).

A web application project does have a project file; it's treated as a class library project. However, the Visual Studio template for a WAP provides some additional things, such as which types of items are visible in the "Add New Item" dialog (i.e. web form, master page, user control, web.config, etc.) and debugging configuration such as the settings for the development web server or IIS.

Codebehind/Codefile attribute
In a WAP, the markup directive (@Page, @Control, etc.) contains the "Codebehind" attribute. This is actually meaningless to the ASP.NET runtime; it's a linking attribute used by Visual Studio to indicate which file is the code-behind for the markup file.
In a site, the "CodeFile" attribute is used. This is similar to the "Src" attribute. (I've experimented with the two and can't find a significant difference between them.) It tells the ASP.NET runtime which source code file should be compiled together with the markup. This is what links a markup file to a code-behind file in the dynamic architecture of web sites.
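For illustration, the two directives look roughly like this (file and class names here are hypothetical):

```aspx
<%-- WAP page: Codebehind is only a Visual Studio linking attribute --%>
<%@ Page Language="C#" Codebehind="MyPage.aspx.cs" Inherits="MyProject.MyPage" %>

<%-- Web site page: CodeFile tells the ASP.NET runtime which source file
     to compile together with this markup --%>
<%@ Page Language="C#" CodeFile="MyPage.aspx.cs" Inherits="MyPage" %>
```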

In both a site and a WAP, the markup (AS?X files) is dynamically compiled. (There is an exception, but it's an advanced topic.) All code files (including page code-behind) in a WAP are always pre-compiled. In a site, nothing is pre-compiled: the ASP.NET runtime will compile everything in the App_Code directory into one DLL, and each page will get compiled into its own DLL. This affects the class scope.

Class scope
Only code in App_Code is available to all classes in a site (that's where you HAVE to put shared code). In the WAP - because it's pre-compiled - all page classes live together in the same assembly and can thus see each other.

Perhaps the largest difference between the two is in how the namespaces are constructed.
In a WAP, all classes are created by default as members of the root namespace defined in the project (typically the project name). For example, in a project named "MyProject" the new page "MyPage" will have a fully qualified class name of "MyProject.MyPage". When you create subdirectories in the project, Visual Studio by default adds another namespace level for pages created in those directories. So if I create a folder "Admin" and another page "MyPage" I will get a class name of "MyProject.Admin.MyPage".

In a site, all pages are part of the default root namespace for dynamically compiled pages: "ASP". Class names are created with underscore separation of their location when they live in subdirectories. In a web site, instead of "MyProject.Admin.MyPage" the page class would actually be "Admin_MyPage". When it's dynamically compiled it becomes "ASP.Admin_MyPage".

Which to choose
It is important to choose the right project type. With the changes introduced in Visual Studio 2005, it is now much easier to work with either type of project (no more IIS integration, woohoo!). Being able to open a web site via FTP is very helpful for certain needs. For some, the web site model will be ample. It's great for tests or simple sites that aren't code intensive.

However, I have found that in professional development the WAP is the better choice. Because there is a project file "controlling" the project, it's easier to manage what is actually included in the project, which helps control things such as the source control repository items for the project. In my case, having the project file is also necessary for the build system, as the project file provides the parameters for what to build for a given project.

Yes, using a WAP forces us to always precompile the application. On the down side, this makes updates more difficult because any other changes get rolled in with an update; we can't just update one single page. However, this is good in several ways.

Simply put, production code should not be updated willy-nilly. We need to exercise a fair amount of control over what gets pushed to production. The app should be regression tested by QA. Also, with a good build system and source control practices, you can deploy patches as necessary without including changes being made in a given application's main trunk. If you do need to make a change, there are ways to "patch" a single page by reverting it to the web site code file model.

Another benefit of using the WAP is that the project configuration is kept in the project file instead of in the web.config, where it really doesn't belong. This keeps the concerns (configuration of the actual app versus configuration of the project within Visual Studio) well separated.

Yet another good aspect to the WAP is that you can "see" all the classes in the project - they are all within the scope of the entire assembly. In some large projects with many developers and many pages that require query string arguments to function I've used a technique for doing "strongly typed page urls". Follow the link for more details, but in short: I create static page methods that return a properly formed URL. Using a managed method provides the opportunity to force required page parameters by using regular method arguments.
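As a sketch of that technique (the class, page, and parameter names here are hypothetical, not from the original post):

```csharp
// Hypothetical WAP page class. Because all page classes live in one
// assembly, any other page can call this static method directly.
public partial class ProductPage : System.Web.UI.Page
{
    // Required query string arguments become typed method parameters,
    // so forgetting one is a compile-time error instead of a runtime bug.
    public static string GetUrl(int productId, string category)
    {
        return string.Format(
            "~/Products/ProductPage.aspx?id={0}&cat={1}",
            productId,
            System.Web.HttpUtility.UrlEncode(category));
    }
}

// Usage from some other page:
//   Response.Redirect(ProductPage.GetUrl(42, "tools"));
```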

This is all obviously very biased towards using the WAP. This is partially due to where ASP.NET development started: in 1.1, with the web project. In the interest of full disclosure, I haven't worked with the web site model enough to really speak fairly for it. However, between the little I've worked with it and what I've heard from other developers, for anything that isn't a trivial web site, the WAP is the way to go. The web site type is good in some cases, but as with any tool, it should be used where appropriate. Fortunately, Visual Studio has pretty good support for converting a web site to a web application project, so upgrading from a site later on is not terribly difficult.

Thursday, May 29, 2008

Mobilized Organization

LifeHack has a good article on staying organized using a mobile phone.

I have started using Google Calendar in this way a bit. I'm fortunate in that I have no commute to work and I'm generally either at work or at home, so I'm hardly ever away from a computer. Plus, I generally don't have that much going on that I need to schedule.

Good article though.

Wednesday, May 21, 2008

Continuous Monitoring & cool gizmos

I just listened to an interesting interview on Hanselminutes with Owen Rogers, one of the original developers of CCNet. They discussed continuous integration and continuous monitoring. Well worth a listen.

Get the show here

During the interview, Scott and Owen mentioned a few technologies/products I hadn't heard of yet. One is Gumstix, which are super-micro Linux computers, literally the size of a stick of chewing gum. Another is Chumby, a wifi-connected open Linux platform alarm clock on steroids. Some very cool stuff that I definitely need to learn more about (and of course will eventually succumb to purchasing).

Thursday, May 15, 2008

Progress - in bytes per second

Just over 10 years ago I had a good day connected to the internet. I was on a dial-up ISP getting a 57,600 bps connection speed. I was on for nearly 15 hours and received over 151 Megabytes! WOW! Before I disconnected I took a screen shot for posterity.

I recently downloaded some ISOs of Ubuntu Linux over my broadband cable connection. The download of about 524 Megabytes took maybe 10 minutes and maxed out at 1,027 KB/sec. Not too shabby for a sustained speed.

I just did a speed test and got an astounding 9151 kb/sec! If my math is correct, that's an increase of 9,313,024, or 16,168%!! I guess I shouldn't expect less from a 10 year gap. It's just too bad that I can't drive 161 times faster than I did in '98. (But I think gas prices are trying to keep up!)

I can't wait until fiber is competitively priced.

Ubuntu Adventures: WP/mysql/smbfs

Captain's Log: 14 May 2008

Tonight I continued my experiments with Linux.

I managed to install mysql-server, although I haven't gotten any databases set up yet. I also installed WordPress but don't have that running yet either.

The bigger achievement was getting the Linux box to see the file shares on my HP MediaVault NAS box. I found the HP instructions for doing this and had a go. I tried mounting it using NFS but it didn't seem to want to do anything. So I ended up installing the Samba file system tools ("apt-get install smbfs"). Then I was able to mount using smb. After finding instructions on how to set up the credentials bit of it, I was able to configure the /etc/fstab file to automatically mount the NAS shares. Very cool.
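For reference, the fstab entry ended up looking something like this (the host, share, and paths here are made up, not my actual values):

```
# /etc/fstab line: mount the NAS share at boot via smbfs.
# The credentials file holds username=/password= lines; chmod it to 600.
//mediavault/FileShare1  /mnt/nas  smbfs  credentials=/etc/nas.cred,uid=1000,gid=1000  0  0
```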

In retrospect, I think NFS might still work. After smb failed the first time, I did a quick ping test on the box's NetBIOS name and saw a reply, but I didn't look at the reply details long enough to realize what had actually happened: my ISP has recently started answering all unresolved DNS names with some crummy parking page on their servers. It's screwed me up more than once, since some of my machine names don't resolve like they used to.

One of my secondary goals (if possible) is to put the subversion repositories on the NAS box instead of on the Linux server itself so I have some level of hardware redundancy (the NAS is set up with a mirrored volume at the moment). I think that by default subversion uses Berkeley DB as the repository data store, but you can change that to use just the file system. If BDB can't be used over smbfs (which I suspect it can't), then I'll try a file-system-based repository, which hopefully will work. If neither works, then I guess I'll just have to create a cron job (another learning curve) to regularly back up the repo data stores to the NAS.
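If the file-system-backed store works out, creating the repository should be a one-liner (the path is hypothetical; --fs-type requires Subversion 1.1 or later):

```
# FSFS stores revisions as plain files, avoiding Berkeley DB's
# problems on network file systems.
svnadmin create --fs-type fsfs /mnt/nas/svn/main
```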

One step at a time.

Ubuntu Adventures: The beginning

Captain's Log: 5 May 2008

Mission: Install Linux (yet again) and find an actual use for it. I've installed it several times before, but after finishing, just stared at the login prompt and thought to myself, "Well, now what do I do with it?" This time, I have some goals in mind:
  • Convert my source control system to subversion (I've tried subversion on WAMP on Windows Server 2003, but it hasn't worked yet. Good excuse to upgrade to Linux.)
  • Move my blog from blogger to my own server using WordPress.
  • Be able to actually claim to have some minuscule clue about a non-Microsoft OS.

The hardware:
  • Old Compaq desktop
  • Intel Celeron 500MHz
  • 128MB RAM
  • 250GB HD

I have another box (Pentium 4 - 550MHz; 384MB RAM) that is currently running Windows Server, hosting FTP and my current source repository (SourceGear Vault). However, if I can find success with subversion on the Linux installation, then I may decommission Windows and switch that box over to a fresh Linux setup.

I installed Ubuntu Linux server. I attempted to install 8.04 first but it failed for unknown reasons. However, 6.10 (Edgy Eft) succeeded. During the installation I am pretty sure I selected the LAMP installation option, but to be honest I might have done it wrong - I did it during a fly-by while chasing down my 2-year-old. Perhaps I didn't select the right option. Anyway, after installation I found that neither Apache, MySQL nor PHP was installed (at least I got the L part of it working).

After doing some searching I discovered the apt-get command. I ran it with some upgrade steps and it updated several packages and modules. I then used it to install apache and php. Later I tried "apt-get install subversion" and it worked. I'm starting to like this!

Once subversion was installed I created a repository, then started playing with TortoiseSVN from my Windows desktop to put in my whole source tree. I'll likely blow away the whole repository once I figure out what I'm doing but I'm making progress.

It's taking some time to get used to the different style of system administration. I'm so used to all the Windows GUI tools for changing settings. However, I'm really liking the transparency and plain-text methodology of Linux.

Friday, May 09, 2008

Wait, Wait... Busted

My wife and I attended a live performance of the NPR program Wait Wait... Don't Tell Me. It was very entertaining. The panelists were Charlie Pierce, Amy Dickinson and Mo Rocca. The scheduled local guest was Governor Eliot Spitzer; however, as host Peter Sagal said, "Something came up. (Then the governor paid $4000 and it went down again.)" I'll leave it there. In the former governor's place was a large arrangement of flowers.

To my delight, the replacement guest was television celebrigeek Adam Savage from Discovery channel's MythBusters! He connected in from a studio in California and was interviewed for a good 20 minutes. It would have been far cooler to have him there, but it was fun regardless.

Having worked in radio for many years, I am always interested to watch radio show production. This performance was no exception. They have call-in contestants, a few sound effects and quick-thinking participants that the producers and engineers have to work around. After the show completed, they spent 10 minutes going back through the show doing re-do takes where they needed to clean up introductions or whatever. Having a large (~2600 people) live audience complicates it a bit as well.

Overall, it was a good time and a chance to get out of the house.

Saturday, May 03, 2008

Finding myself

I've struggled for many years finding and/or creating a digital identity for myself. I've never had a catchy screen name or hacker name or handle or whatever you want to call it. I'm not creative in that way. The creativity I do possess is with solving tangible problems in both tactile and abstract domains. I enjoy handyman-type work, do woodworking as a hobby and, of course, I work as a software developer, so I'm constantly coming up with solutions to technology challenges. But I'm simply no good at coming up with things out of thin air. That's why I don't dance, draw or partake in other activities that I'd generally classify as visual art. It usually requires some form of inspiration from a greater force, which I lack. My inspiration comes from the problems that need to be solved. (I suppose this is probably true for most technologists.)

So anyway, I've found it rather difficult to come up with a name to use for my internet presence or for this blog. Once, my sister said "You're the biggest geek dork I know." So, still lacking a name different from that given to me while still tethered to my mother's womb, I went and registered This certainly fits my general self-classification as a geek and dork, but it just feels a tad too sophomoric. I've tried a few names similar to those I've seen on others' blogs, but I hate the feeling of being a sheep just following the shepherds. But like I have stressed, I just don't have what it takes to make up something good.

Despite all this, for some reason, yesterday the phrase "compiled thoughts" popped into my head. It sounded like a good blog title and certainly reflects today's trend of aggregating one's mental randomness and uploading it to the likes of blogs, Twitter, or what-have-you. I did some googling and found very little use of those words together outside of discussions on writing. I figured that the domain name "" must be taken, but to my surprise it was not, so I grabbed it.

I've pointed the domain at this blog for now and renamed it accordingly. However, I still don't have a "name" for myself, per se. At this point I guess I'll just keep using my real name, it's boring but easy to remember. At least now I have a title for the blog that I actually like. Plus, it sounds mildly intellectual.

Downloading from the series of tubes

Yesterday I was working on an automation process to deal with some vendor data. Unfortunately, the vendor doesn't have the data on an FTP location and names the files with dates so they change whenever updated. The files have to be downloaded manually from the vendor's web site after logging in. Not a process that's terribly easy to perform automatically.

One of my coworkers had already written the bulk of the screen scraping logic that logs in and looks at the download page for the links to the file names of the available downloads. This works great. He had put in the code to actually download the file using the HttpWebRequest, HttpWebResponse and byte stream classes. I commenced some testing and found that only a portion of the data was getting downloaded, leaving the file (a Zip in this case) corrupt. I googled a bit and found some articles with various suggestions on how to process the response stream from the web response class. It seems most people had problems with this seemingly simple task. Then I ran across a suggestion to use the WebClient.DownloadData() method. It was only about four lines of code.

As I pasted it into the program I decided to check out this class I had yet to use, WebClient. Lo and behold, there was also a method called DownloadFile(). What started as a dozen lines of code for manipulating a byte stream (code that ultimately never even worked) was now reduced to a single call:

new WebClient().DownloadFile(downloadFileUrl, downloadPath);

It's always a great feeling when you discover a class you didn't even know existed in the .NET framework that provides exactly what you are looking for. I'm happy to know that I don't need to become an expert at handling byte streams, instead I can focus on the business problem that I was trying to solve.

However, it leaves me wondering how many classes or methods I still don't know about that might let me reduce the code I write and solve problems in a much cleaner and more robust way.
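For completeness, here's roughly what the final code looks like with proper disposal, since WebClient is IDisposable (the URL and path below are placeholders):

```csharp
using System;
using System.Net;

class VendorDownloader
{
    static void Main()
    {
        // Placeholder values; the real ones come from the scraped page.
        string downloadFileUrl = "http://www.example.com/data/current.zip";
        string downloadPath = @"C:\data\vendor.zip";

        // WebClient deals with the response stream and buffering internally.
        using (WebClient client = new WebClient())
        {
            client.DownloadFile(downloadFileUrl, downloadPath);
        }
    }
}
```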

Thursday, May 01, 2008

Beware of non-specific references

I recently completed a change to a web application that utilizes the ASP.NET AJAX web extensions (System.Web.Extensions.dll). This assembly is loaded into the Global Assembly Cache (GAC) and referenced there by the web project. I ran the web app locally without any issues. After updating the source code repository with my changes, I asked the build server to create a new release candidate of the app. This worked fine.

I then deployed it to the staging/test server and hit the URL. Failure! The error I received was

"Parser Error Message: The base class includes the field 'UpdatePanel1', but its type (System.Web.UI.UpdatePanel) is not compatible with the type of control (System.Web.UI.UpdatePanel)."

Clearly, a System.Web.UI.UpdatePanel is a System.Web.UI.UpdatePanel. So I investigated further. My web.config file contained this:

<compilation defaultLanguage="c#" debug="true">
  <assemblies>
    <add assembly="System.Web.Extensions, Version=1.0.61025.0,
      Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  </assemblies>
</compilation>

This provides the assembly version the web app is loading for the creation of the dynamically compiled pages. Thus, the update panel created from the markup is the one from the version 1.0.61025.0 assembly.

I started looking at the web application assembly. Using Beyond Compare with a conversion rule to process DLLs with ILDASM, I was able to look at the compiled assembly references. I found my version of the assembly to have the reference as

.assembly extern System.Web.Extensions
.publickeytoken = (31 BF 38 56 AD 36 4E 35 ) // 1.8V.6N5
.ver 1:0:61025:0

while the build server's version had

.assembly extern System.Web.Extensions
.publickeytoken = (31 BF 38 56 AD 36 4E 35 ) // 1.8V.6N5
.ver 3:5:0:0

So, the code-behind instance of the update panel in the prebuilt web app assembly comes from the referenced 3.5 assembly, while the runtime instance created from the markup comes from the 1.0 version. This was the culprit.

This happened because Visual Studio created the reference to the assembly in the GAC but created it with the "Specific Version" flag set to "false". When the project was built on the build server, it took the newer assembly for the reference. I changed the flag to "true" in the project.
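In the project file, the change shows up as a SpecificVersion element on the reference, roughly like this:

```xml
<Reference Include="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
  <SpecificVersion>True</SpecificVersion>
</Reference>
```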

After committing the change I asked for another build. Now the assembly built by the build server referenced the correct version and the app runs.

Thursday, April 17, 2008

MediaVault Mirror Added | deddA rorriM tluaVaideM

Monday night I ordered a second drive from NewEgg to match the one already in my HP MediaVault. It came on Wednesday (way fast delivery!) and I installed it into the box.

After a little fighting with the administration tool, I finally got it set up as a mirror. I tried several times and kept getting either no response (either the page timed out or it just returned me back to the disk settings screen) or I'd get an error. I couldn't figure out what on earth was wrong. It is a brand new drive and shouldn't have anything on it. I didn't format it, as the mirror is a block-by-block feature, so it shouldn't require any pre-formatting. What finally worked was selecting the new disk and choosing "Erase Disk". The erase process took only seconds, but I noticed a change in the recognized drive space. I then tried the mirror process again and it worked right away. None of the half dozen sets of instructions I read mentioned anything about erasing the drive first, but that seems to be what was needed.

So now the main disk volume is in the rebuilding phase and should be done in some 12 hours. I feel better at least having a mirror of the data. The next step is to get a removable backup system in place. Fortunately, the MediaVault has USB ports to which you can connect mass storage devices. So I just need to get a USB hard drive or similar so I can back up periodically and safely store away.

Semi-re-written URLs and the ~ resolver

I decided to try out using the PathInfo property of the HttpRequest class. Instead of the typical use of querystring vars


I thought I'd try out creating a friendlier URL such as


Instead of using the querystring to get the keyed values, I'd parse the values out of the PathInfo string. This in itself worked fine; however, a small problem arises.
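Pulling the values back out is simple enough (a sketch; the URL shape and variable names here are hypothetical):

```csharp
// For a request like /orders/View.aspx/electronics/42,
// Request.PathInfo returns "/electronics/42".
string[] parts = Request.PathInfo.TrimStart('/').Split('/');
string category = parts[0];        // "electronics"
int itemId = int.Parse(parts[1]);  // 42
```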

The app uses master pages and thus does some URL resolution for resources such as the CSS file. I've noticed over the years that sometimes the path will resolve to a root relative path:


and sometimes to a relative path:


I haven't found any way to control this behavior and I've wondered what determines it. Today I wondered if it might actually be somewhat sensible in that it resolves to the shortest result. For example, if the resource is only one directory off, it might resolve to a back reference (../???), whereas if it's a long way off (perhaps several directories back up) it will resolve to the root (/mysite/???). But there doesn't seem to be a rhyme or reason to it.

The end result is that when I browse to my semi-re-written URL using path info data, the browser doesn't care that it's path info, so my CSS file reference ends up being something logically like this:


which resolves to


which clearly isn't what I've intended.

If I can determine how to force the ~ resolution behavior to be root relative, then it will fix the problem. Otherwise, I can't use PathInfo, and similarly I can't do any other kind of URL re-writing. :-( I suppose I could just make all my resource references root relative, but then I'd have to significantly change my development strategies. I tend to favor running in IIS (using virtual directories) for two reasons: 1) it's more realistic; 2) I work on some apps that consume web resources from other sites that I also run on my dev machine in IIS, which is simpler than starting and managing two instances of Cassini.

Hopefully there's a way to force the behavior, otherwise it's back to query string vars.

Monday, April 14, 2008

Super Dead-End?

What happened to the information super-highway?

Back in the "old days" (maybe nearly 5 years ago) we used to hear all this talk of the information super-highway. I haven't heard that in a while. I've heard about social networking/computing, the semantic web, and of course the über Web 2.0. Sites with useful or nifty services (google, flickr,, wikipedia) or those with viral penetration (myspace, facebook, youtube) have become parts of our new lexicon.

But what's happened to the highway? Are we on it? Have we surpassed it? Has information technology graduated up to flying cars and sky-ways and left the super-highway asphalt old and cracked far below, metaphorically speaking? We just never hear anyone talk about it any more. I miss the old highway.

Maybe I'll dust off my old US Robotics 28.8 Kbaud modem and see if I can remember what the old ride used to feel like. Although, I don't think it will feel quite the same over my VOIP-based phone service carried on a 6.5 megabit cable line. And does Windows Vista even support the serial port?

You can never really go home again.

Friday, April 11, 2008


Yesterday at work, for the first time in 8 years of using Visual SourceSafe, I was confronted with the real prospect of having to branch a set of Visual Studio .NET projects. Up until now I have managed to avoid the need; I'm not sure how.

After some lengthy discussions with the development team, I created a sandbox VSS database and imported a set of VS2005 projects that make up a single solution. I created a VSS directory to contain branches, set up the hierarchy and executed a "Share and Branch" operation for each project in the solution into the branch directory. This worked fine.

We then started looking at the merge functionality. I knew all of this existed but had never used it, except once many years ago when the team I was on tried the "multiple checkout" mode of VSS (we switched back to exclusive checkout not long after). Our main discovery was that you must merge every file individually. This isn't to say that you have to merge them manually, however. While we didn't encounter it this time in the test, my recollection is that VSS will merge automatically unless there is a conflict, in which case you get the diff/merge dialog to decide what goes. The important point is that there doesn't seem to be a way to merge an entire VSS directory back to its initial branch in the tree (VSS lacks the "trunk" terminology). You must initiate each merge operation on each file manually. With tens of projects in a solution, dozens of changed files and hundreds of coexisting files, it is not viable to manually merge every file. It would be faster to use Floppy/Sneaker Net.

So I'm becoming more and more convinced that we are closing in on the end of VSS in our organization. I've been using SVN (Tortoise & Ankh) a bit lately and it seems to work pretty well. SVN certainly has far better branching support.

One of my concerns with any SCM system is how to execute branches within the context of Visual Studio solutions. If I branch one or more projects then the solution file will need to change; more practically, we'll just create another one for the branch. But then the project files change, and the physical location of the projects changes, which needs to be reflected in the project references. Then, after completing work on a branch, when a project file is merged back into the trunk you have to take care to merge project references back to their proper state.

In addition to all of this, the build system we have been using may encounter issues because of the way the build is executed. However, if the branched project files are committed correctly then it should behave correctly.

I think I need to do some extensive reading on:
A) How branching and merging works best for people
B) How to work with Visual Studio using branched projects

Monday, March 31, 2008

VS Snippet: On-demand getter

I often use base page classes in my web apps. On these classes I put read-only properties for access to business layer classes. I typically set these getters up to do on-demand (or "lazy") instantiation because not all pages will use the various business class instances that are available. Instead of creating them all on class construction or page load, they get created as needed. After figuring out how to fix the "prop" shortcut to work the way I needed, I realized it made sense to create a snippet for on-demand properties. Now I simply type "propod" and I get this:

private object myVar;

public object MyProperty
{
    get
    {
        if(myVar == null)
            myVar = new object();
        return myVar;
    }
}

Here's the Visual Studio .snippet XML for it. Just save it to a .snippet file in your Visual Studio snippets directory (i.e. C:\Program Files\Microsoft Visual Studio 9.0\VC#\Snippets\1033\Visual C#):

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>propod</Title>
      <Shortcut>propod</Shortcut>
      <Description>Code snippet for on-demand read-only
      property and backing field.</Description>
      <Author>Peter Lanoie</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>type</ID>
          <ToolTip>Property type</ToolTip>
          <Default>object</Default>
        </Literal>
        <Literal>
          <ID>property</ID>
          <ToolTip>Property name</ToolTip>
          <Default>MyProperty</Default>
        </Literal>
        <Literal>
          <ID>field</ID>
          <ToolTip>The variable backing this property</ToolTip>
          <Default>myVar</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[private $type$ $field$;

public $type$ $property$
{
    get
    {
        if($field$ == null)
        {
            $field$ = new $type$();
        }
        return $field$;
    }
}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
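To show the pattern in context, here's a standalone sketch of how an on-demand getter behaves in a base page class. BasePage and CustomerService are hypothetical names for illustration, and the property is public here only so the example can stand on its own:

```csharp
using System;

// Hypothetical business-layer class, standing in for a real service
class CustomerService { }

class BasePage
{
    private CustomerService customers;

    // On-demand getter: nothing is created until a page asks for it,
    // and repeated access returns the same cached instance
    public CustomerService Customers
    {
        get
        {
            if (customers == null)
            {
                customers = new CustomerService();
            }
            return customers;
        }
    }
}

static class Demo
{
    static void Main()
    {
        var page = new BasePage();
        // Same instance both times: created once, then cached
        Console.WriteLine(ReferenceEquals(page.Customers, page.Customers));
    }
}
```

Pages that never touch the property never pay for constructing the service, which is the whole point of deferring the instantiation.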

C# "prop" shortcut in Visual Studio 2008

Anyone who builds classes in Visual Studio hopefully uses the "prop" shortcut to generate a property with its backing variable. Just type "prop" and hit tab twice and you get this:

private int myVar;

public int MyProperty
{
    get { return myVar; }
    set { myVar = value; }
}

Update the fields and away you go. This is a major time saver. I was disappointed to see Microsoft change its behavior in the 2008 upgrade. The new behavior is to generate an automatic property:

public int MyProperty { get; set; }

I recently learned where these shortcuts, or snippets, are stored. It turns out it's pretty easy to modify them or add new ones. In Visual Studio 2005, you can find them here:

C:\Program Files\Microsoft Visual Studio 8\VC#\Snippets\1033\Visual C#

For 2008, here:

C:\Program Files\Microsoft Visual Studio 9.0\VC#\Snippets\1033\Visual C#

It appears that there aren't that many changes between versions. Of course, the one they changed is the one I suspect most of us use the most often. Shame on them.

I'm not here to argue the merits of full vs. automatic properties; I'm all for automatic properties. The problem happens when we return from the VS2008 "Hello World" example to our real-world code base. VS2008 happily upgrades an assembly I created in VS2005 and keeps it backward compatible. However, it doesn't seem smart enough to recognize that this assembly targets the 2.0 framework, or rather that the project will still be used in VS2005. (I suppose one might argue, "How would it know?") All those automatic properties won't compile under the C# 2.0 compiler, while the 3.0+ compilers expand them out automatically.

So I decided that rather than cursing every time I have to manually expand a property I would instead fix the problem. Simply manipulating the .snippet files in the directory mentioned earlier does the trick.

I copied the "prop" snippet from the VS2005 directory into the one for 2008 and renamed the "prop" snippet in 2008 to "aprop" ("propa" was taken). Just be sure to edit the snippet XML to rename the snippet's shortcut and name as well; those are the values that show up in the IDE.
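The part of the snippet XML to edit is the Header element. After renaming, mine looks roughly like this (the Title and Shortcut values are the ones that matter; the rest is shown only for context and may differ in your copy):

```xml
<Header>
  <Title>aprop</Title>
  <Shortcut>aprop</Shortcut>
  <Description>Code snippet for an automatically implemented property</Description>
  <Author>Microsoft Corporation</Author>
  <SnippetTypes>
    <SnippetType>Expansion</SnippetType>
  </SnippetTypes>
</Header>
```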

An interesting side note to this: having forgotten to update the snippet shortcut and name in its XML, I tried it out and discovered that VS recognizes the duplicate names. It prompts you with "Multiple Snippets" and you must make another choice. Neat. Someone was thinking.

Thursday, March 20, 2008

Old hardware + young kid = tearful parent

It all started a few days ago when my friend Alex asked me if my web sites were still down. You see, I host a couple of web sites on an old machine at home. It's mostly just personal stuff, but there's some (very little, really) professionally relevant content on one of the sites. Unfortunately, the machine is starting to break down. The fan squeaks and rattles on occasion, I've had a hard drive inexplicably fail and then return to life, and the board is just old and slow for what I need it to do. Despite this, yesterday morning I turned the server back on.

Some time in the middle of the morning I remembered that I wanted to check to see if the machine had actually gotten up and running, so I hit the sites on it, one of which is my personal photo browser. In doing this, I started browsing early photos of my son. Then the Dan Fogelberg song "Longer" popped into my head. So I downloaded the MP3 from the music collection also hosted on this box at home. I listened to it a few times and got to thinking about my kid. I can't say that I've ever been a particularly emotional person, but it's amazing what a kid will do to you. I'm fortunate to have a private office tucked away in a quiet hallway.

So yesterday evening, after putting Spencer to bed, I put this together:

Now I just need to put this video into a digital photo frame that I can hang above Spencer's time out chair to help dissipate the frustration a 2-year-old can create.

Thursday, March 13, 2008

abstract VS. virtual explained

Something I struggled with for a while when first getting into C# was the difference between, and use of, the 'abstract' and 'virtual' keywords. Here is my simple explanation:

abstract - Members marked with this keyword carry no implementation and MUST be declared in an abstract class. They MUST be implemented (overridden) by any concrete (non-abstract) class that extends the declaring class. (This corresponds to the 'MustOverride' keyword in VB.NET.)

virtual - Members marked with this keyword provide a default implementation that CAN be overridden by a class that extends the declaring class. (This corresponds to the 'Overridable' keyword in VB.NET.)

C# Example:

abstract class AbstractClass
{
    protected void NotModifiedMethod() { }
    protected abstract void AbstractMethod();
    protected virtual void VirtualMethod() { }
}

class ConcreteClass : AbstractClass
{
    //This HAS to be overridden because it's abstract
    protected override void AbstractMethod() { }

    //This CAN be overridden because it's virtual
    protected override void VirtualMethod() { }

    //You CAN NOT do this, because NotModifiedMethod is not
    //marked as abstract or virtual (compiler error CS0506):
    //protected override void NotModifiedMethod() { }
}

VB.NET Example:

MustInherit Class AbstractClass

Protected Sub NonModifiedMethod()
End Sub

Protected MustOverride Sub MustOverrideMethod()

Protected Overridable Sub OverridableMethod()
End Sub

End Class

Class ConcreteClass
Inherits AbstractClass

'This HAS to be overridden because it's marked MustOverride
Protected Overrides Sub MustOverrideMethod()
End Sub

'This CAN be overridden because it's marked Overridable
Protected Overrides Sub OverridableMethod()
End Sub

'You CAN NOT do this, because NonModifiedMethod is not
'marked as MustOverride or Overridable:
'Protected Overrides Sub NonModifiedMethod()
'End Sub

End Class
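To see the difference at run time, here's a small standalone sketch. Shape and Square are my own illustrative names, not taken from the examples above:

```csharp
using System;

abstract class Shape
{
    // abstract: no body at all; every concrete subclass MUST override
    public abstract double Area();

    // virtual: has a default body; subclasses MAY override it
    public virtual string Describe() { return "a shape"; }
}

class Square : Shape
{
    private readonly double side;
    public Square(double side) { this.side = side; }

    // Required: Shape.Area() has no implementation to fall back on
    public override double Area() { return side * side; }

    // Describe() is not overridden, so the base implementation is inherited
}

static class Demo
{
    static void Main()
    {
        Shape s = new Square(3);
        Console.WriteLine(s.Area());     // dispatches to Square.Area
        Console.WriteLine(s.Describe()); // falls back to Shape.Describe
    }
}
```

Calling through the base-class reference shows both behaviors at once: the abstract member always runs the subclass's code, while the virtual member runs the base implementation unless someone overrides it.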

Wednesday, February 20, 2008

Interfacing Hardware with .NET

I went to my local .NET user group meeting last night and watched a very exciting presentation by Brian Peek on interfacing hardware with .NET. The presentation focused on the use of Phidgets USB controller boards to read and control various inputs and outputs. Phidgets makes the controller boards as well as a whole host of sensors for things such as RFID scanning, temperature, position/rotation, and light. They also have some small servo motors. To round it all out, they provide APIs for .NET, Visual Basic, VBA, LabVIEW, Java, Delphi, C and C++. The examples Brian showed us looked ridiculously simple to program with the API they provide. It certainly puts a new spin on the various ideas I've had over the years for doing home automation.

The last part of Brian's presentation was about interfacing with the Wiimote from .NET using the WiimoteLib managed library he developed. He put everything together and showed us a simple R/C car controlled by a Phidgets digital i/o board using the Wiimote as the remote control. It was the first time I'd ever seen a .NET user group so enthralled.

Tuesday, February 19, 2008

Lingering check-outs: Increasing VS Build Stability

For those of you who use the checkout-modify-checkin method of source code control within Visual Studio (i.e. many Visual SourceSafe users), here's some advice for increasing integration ease and build stability between developers and a build system.

As you start to share more VS projects, it's important to be aware of what you have checked out, particularly the project files. When adding items, if you check in the project file but not the new item itself, the project will not build because of the item missing from source control. However, if you leave the project file checked out while you work on the new item, another developer won't be able to check out the project to modify it or add items to it, because of your exclusive checkout. What often happens is that one developer is in the middle of building the new item when someone else needs to work with the project, so the first developer checks in both the project file and the new item. However, the other developer still can't build the project due to the half-finished item. This leads to wasted time while the unfinished, breaking code is commented out and checked in, which results in confusing noise in source file history.

The best way to avoid this scenario is to keep project files checked in as much as possible. My experience has shown the following process for adding items to projects to be most effective:

- Let Visual Studio check out the project when adding the new item.
- Immediately check in the project and new item as it was added. Generally a build test isn't necessary on new items because they are void of non-compiling code. (Continuous Integration can help ensure this even if your local build doesn't work due to other changes you are making.)
- Proceed to edit the new item which will need to be checked back out.

By using this procedure the project file gets checked in and the new (empty) item is added to source control resulting in a successful build. Now when another developer needs to modify the project or add an item to it, they'll get the updated project as well as the empty item which will compile. They can then keep working without compile problems.

Here are a few other things I've found to help:
- If you don't already, try using Visual Studio's Pending Checkins tool window so you can easily see what is checked out to you within the solution. It's much easier than searching through the solution explorer or using the VSS explorer, and it reduces the likelihood of forgetting to check in a file.
- Don't let Visual Studio automatically check out files. You'll be much more likely to remember that you need to check files in if you were forced to acknowledge their checkout.

I realize that another solution to this problem is to use the modify-merge-commit method. However, the fact is that many development teams use Visual SourceSafe, and VSS doesn't have the best merge tools, so the exclusive checkout method is preferred for its stability.

Even on aggressive projects with many developers working in the same projects, I've found that two developers are rarely working in exactly the same file(s). Using this technique to maintain project integrity goes a long way toward reducing interference between developers.

Tuesday, February 12, 2008

Enlightening Insomnia

It's rather interesting that so much mental stimulation can come out of insomnia.

I recently had a side job doing technical editing for a technology book. It involved lots of reading. My wife works evenings and we have a toddler, so my time after work is consumed by him, and I wasn't able to do much reviewing until after 8 pm most nights. As a result I struggled to finish the evening's block of work due to my nightly fatigue. However, after I finished, I'd check email and get distracted by something that would lead to continually cascading diversions until, before I knew it, the little hand on the clock had worked its way past 1. There I'd be, more awake than when I put my son to bed almost 5 hours prior. Of course, the very fact that I stayed up too late would be the source of the next evening's difficulties. Oh, the vicious cycle.

At the moment it's almost 3:30 AM. I went to bed early, quite tired and with a headache. I think I slept for a few hours, then awoke for my nightly fight with the cat over my claim to the bed. This was followed by an hour-plus of tossing and turning as I thought about work, side projects, other miscellaneous distractions and my resurfacing headache. After a while I gave up, took some pills and came to the computer. I have since spent about 30 seconds checking email and several hours writing blog posts and looking up random word definitions on Google.

I wish I had a hibernate button.

Developers today: Learning to fish

I have been a member of the Wrox Programmer to Programmer (p2p) forum for many years now. When I first joined, the forum was a standard mailing list community populated by a really good bunch of folks, including some Wrox authors themselves. I recall most of the questions being fairly well thought out and asked by people who had run out of options trying to solve their problems. I started as a new developer learning classic ASP. I had books; I read them. I tried code examples and experimented with new things in an effort to understand how things worked and to educate myself so I could do my job. But when I hit a brick wall, or just wanted to understand why some code behaved the way it did, I would email the list. I got a lot of great help there and subsequently met a good friend through the list. I owe much of my initial education in web and ASP development to the books and that list.

In part due to a feeling of obligation to give back to the digital community that helped me and simply because I like to educate people by sharing my experience, today my participation on the forum is that of a contributor instead of an inquisitor. I enjoy helping people learn and understand that which I can explain. Even when a question is asked about a topic I might be unfamiliar with I think about a solution and try to provide suggestions, even if they are only abstract, conceptual ideas. Often I become intrigued with the question and stop what I'm working on, to the detriment of my own productivity, and try to solve it myself in an effort to quench my own curiosity. Never have I done so without learning something new.

This nature of mine is what makes me a successful software developer and is, I believe, a trait necessary to those that are in the same profession. I enjoy solving problems and exploring the technologies I use to find better and faster ways to solve those problems. Most often I find a problem I am having has already been had by others and that there are solutions available within a short distance of a thoughtful search.

Getting the Fish
It seems that more and more people pursuing software development lack the natural tendency to solve problems. I am finding that an increasing number of participants on the p2p forum are looking only for the answer to today's riddle and not an explanation of the underlying problem. They want sample code or, in many cases, they just want someone to do it for them. In extreme cases a post contains what suspiciously looks like a homework assignment with the query "answer these questions", without so much as a "please". I highly doubt the poster is providing a "quiz of the day" to help us keep our minds sharp.

In the case of an inquiry for an explicit answer, a few seconds of searching will most often reveal countless pages of sample code or reference documentation relating to the problem at hand. It baffles me to think that someone likely spent more time writing a forum post than it would have taken to search for and find the answer. This is evident in the number of responses on the forum that are simply links to other forum threads, articles, tutorials, documentation, or just a Google search on the keywords of the original post topic. Smart contributors aren't likely to spend their time regurgitating available information.

Learning to Fish
I hesitate to provide explicit answers to questions that I know a poster should be able to solve on their own. Often I'll reply with a question whose answer should reveal the solution. I want to help people learn to think about the problem and how to solve it instead of just providing them the answer.

A short post that simply says "here's my code, please fix it" (which I often easily could) is less likely to get a response from me than one that has been well thought out and takes longer to read. The former is simply a cop-out, while the latter conveys the poster's interest in understanding the problem. It is surprisingly easy to spot posters who are genuinely stuck and are just missing something tricky that I've encountered before. In those cases the answer is easy to provide, and an explanation gives the inquirer a tidbit of knowledge that is not so easy to find. Any case where the explanation is far more valuable than the answer itself is much more enticing to reply to, the goal, of course, being that the person asking will gain some knowledge that helps them solve similar problems in the future.

The inherent nature of problem solving seems to be missing in more and more people. I suppose this observation is simply influenced by the effects of that nature itself: people who know how to solve problems are not the ones asking for explicit answers; they are finding answers on their own. However, I refuse to believe that they are absent altogether. I often find a solution to a problem I'm facing but still don't understand why the solution works, or why the problem exists in the first place. Most of my own inquiries on the forum are explanations of the problem I had, the solution I found, and the question of whether others have encountered it and why it works the way it does. These posts are quests for further knowledge of the question and answer, not the answer itself. Unfortunately, these questions often go unanswered. Occasionally they spark very stimulating conversation that, while not necessarily answering the question, forces further exploration of the problem. These are my favorite threads.

Availability of fish
Analogous to a grocery store, the internet is a grand marketplace of food for the knowledge appetite. Are people becoming less capable of solving problems because of this easy access to answers? Is it eliminating the need to know how to grow knowledge and solve problems? Have people become complacent, or simply lazy, given that answers are out there and they need not know how to find them on their own? I hope not. My goal is to help people understand the why and the how, not just find the what.