Thursday, April 17, 2008

MediaVault Mirror Added | deddA rorriM tluaVaideM

Monday night I ordered a second drive from NewEgg to match the one already in my HP MediaVault. It came on Wednesday (way fast delivery!) and I installed it into the box.

After a little fighting with the administration tool, I finally got it set up as a mirror. I tried several times and kept getting either no response (the page timed out or just returned me to the disk settings screen) or an error. I couldn't figure out what on earth was wrong. It's a brand new drive and shouldn't have anything on it, and I didn't format it, since mirroring is a block-by-block feature that shouldn't require any pre-formatting. What finally worked was selecting the new disk and choosing "Erase Disk". The erase process took only seconds, but I noticed a change in the recognized drive space. When I tried the mirror process again, it worked right away. None of the half dozen sets of instructions I read mentioned anything about erasing the drive first, but that seems to be what was needed.

So now the main disk volume is in the rebuilding phase and should be done in about 12 hours. I feel better at least having a mirror of the data. The next step is to get a removable backup system in place. Fortunately, the MediaVault has USB ports for connecting mass storage devices, so I just need a USB hard drive or similar that I can back up to periodically and store away safely.

Semi-re-written URLs and the ~ resolver

I decided to try out using the PathInfo property of the HttpRequest class. Instead of the typical use of querystring vars


I thought I'd try out creating a friendlier URL such as


Rather than pulling the keyed values from the querystring, I'd read them from the PathInfo portion of the URL. This in itself worked fine; however, a small problem arose.
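To make the two styles concrete (the page and parameter names below are made up for illustration, and I'm sketching the parsing in Python just to keep it runnable; the real thing reads HttpRequest.QueryString vs. HttpRequest.PathInfo):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URLs -- the actual page and parameter names don't matter.
classic = "/mysite/photos.aspx?year=2008&month=04"
friendly = "/mysite/photos.aspx/2008/04"

# Classic querystring style: values come from the query portion.
qs = parse_qs(urlparse(classic).query)
year, month = qs["year"][0], qs["month"][0]

# PathInfo style: everything after the page name, which is roughly
# what HttpRequest.PathInfo hands back ("/2008/04" here).
path_info = urlparse(friendly).path.split(".aspx", 1)[1]
segments = path_info.strip("/").split("/")
```

Either way you end up with the same keyed values; the friendly URL just moves them out of the querystring and into trailing path segments.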

The app uses master pages and thus does some URL resolution for resources such as the CSS file. I've noticed over the years that sometimes the path will resolve to a root relative path:


and sometimes to a relative path:


I haven't found any way to control this behavior, and I've wondered what determines it. Today it occurred to me that it might actually be somewhat sensible: perhaps it resolves to the shortest result. For example, if the resource is only one directory off, it might resolve to a back reference (../???), whereas if it's a long way off (perhaps several directories back up) it resolves to the root (/mysite/???). But so far there doesn't seem to be any rhyme or reason to it.

The end result is that when I browse to my semi-re-written URL using path info data, the browser doesn't care that it's path info, so my CSS file reference ends up being something logically like this:


which resolves to


which clearly isn't what I've intended.
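The breakdown is easy to reproduce outside the browser. Using made-up paths (Python's urljoin follows the same relative-resolution rules a browser does), a page-relative reference that works fine against the normal URL lands in the wrong place once extra path-info segments are appended, because the browser resolves it against the last path segment's directory:

```python
from urllib.parse import urljoin

# Hypothetical layout: app rooted at /mysite, CSS at /mysite/styles/site.css.
css_ref = "styles/site.css"  # a page-relative reference, as the ~ resolver sometimes emits

# Normal URL: resolved against /mysite/, so it finds the file.
normal = urljoin("http://localhost/mysite/photos.aspx", css_ref)

# PathInfo URL: resolved against /mysite/photos.aspx/2008/, which is wrong.
with_pathinfo = urljoin("http://localhost/mysite/photos.aspx/2008/04", css_ref)
```

The server knows "/2008/04" is path info, but the browser just sees a deeper directory, so every page-relative resource reference breaks.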

If I can figure out how to force the ~ resolution behavior to be root relative, that will fix the problem. Otherwise I can't use PathInfo, and by the same token I can't do any other kind of URL re-writing. :-( I suppose I could just make all my resource references root relative, but then I'd have to significantly change my development strategy. I tend to favor running in IIS (using virtual directories) for two reasons: 1) it's more realistic; 2) I work on some apps that consume web resources from other sites that I also run on my dev machine in IIS, which is simpler than starting and managing two instances of Cassini.
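The root-relative behavior I want amounts to something very simple. As a sketch (Python again for brevity; the /mysite app root is just an assumed example), expanding ~ should depend only on the application root and ignore the current page's location entirely:

```python
def resolve_tilde(url: str, app_root: str = "/mysite") -> str:
    """Expand a ~/ app-relative reference to a root-relative path.

    app_root is a hypothetical example; the point is that the result
    never depends on where the current page lives.
    """
    if url.startswith("~/"):
        return app_root.rstrip("/") + "/" + url[2:]
    return url  # not app-relative; leave it untouched
```

A reference resolved this way (~/styles/site.css becoming /mysite/styles/site.css) renders identically no matter how many path-info segments trail the page URL.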

Hopefully there's a way to force the behavior, otherwise it's back to query string vars.

Monday, April 14, 2008

Super Dead-End?

What happened to the information super-highway?

Back in the "old days" (maybe nearly 5 years ago) we used to hear all this talk of the information super-highway. I haven't heard that in a while. I've heard about social networking/computing, the semantic web, and of course the über web 2.0. Sites with useful or nifty services (google, flickr, wikipedia) or those with viral penetration (myspace, facebook, youtube) have become part of our new lexicon.

But what's happened to the highway? Are we on it? Have we surpassed it? Has information technology graduated up to flying cars and sky-ways and left the super-highway asphalt old and cracked far below, metaphorically speaking? We just never hear anyone talk about it any more. I miss the old highway.

Maybe I'll dust off my old US Robotics 28.8 Kbaud modem and see if I can remember what the old ride used to feel like. Although I don't think it will feel quite the same over my VoIP-based phone service carried on a 6.5 megabit cable line. And does Windows Vista even support the serial port?

You can never really go home again.

Friday, April 11, 2008


Yesterday at work, for the first time in 8 years of using Visual SourceSafe, I was confronted with the real prospect of having to branch a set of Visual Studio .NET projects. Up until now I have managed to avoid the need; I'm not sure how.

After some lengthy discussions with the development team, I created a sandbox VSS database and imported a set of VS2005 projects that make up a single solution. I created a VSS directory to contain branches, set up the hierarchy, and executed a "Share and Branch" operation for each project in the solution into the branch directory. This worked fine.

We then started looking at the merge functionality. I knew all of this existed but had never used it, except once many years ago when the team I was on tried the "multiple checkout" mode of VSS (we switched back to exclusive checkout not long after). Our main discovery was that you must merge every file individually. That isn't to say you have to merge them manually, however. While we didn't encounter it in this test, my recollection is that VSS merges automatically unless there is a conflict, in which case you get the diff/merge dialog to decide what goes. The important point is that there doesn't seem to be a way to merge an entire VSS directory back to its initial branch in the tree (VSS lacking any "trunk" terminology). You must initiate each merge operation on each file manually. With tens of projects in a solution, dozens of changed files, and hundreds of coexisting files, it is not viable to manually merge every file. It would be faster to use Floppy/Sneaker Net.

So I'm becoming more and more convinced that we are closing in on the end of VSS in our organization. I've been using SVN (Tortoise & Ankh) a bit lately and it seems to work pretty well. SVN certainly has far better branching support.

One of my concerns with any SCM system is how to execute branches within the context of Visual Studio solutions. If I branch one or more projects, the solution file will need to change; more practically, we'll just create another one for the branch. But then the project files change too, as the physical locations of the projects change, and that needs to be reflected in the project references. Then, after completing work on a branch, when a project file is merged back into the trunk you have to take care to merge the project references back to their proper state.

In addition to all of this, the build system we have been using may encounter issues because of the way the build is executed. However, if the branched project files are committed correctly then it should behave correctly.

I think I need to do some extensive reading on:
A) How branching and merging works best for people
B) How to work with Visual Studio using branched projects