Yearly Archives: 2004

Site Changes

I’ve made a bunch of changes to this site, with the most obvious being the addition of a sidebar for links and other useful tidbits. Under the hood, the following changes were made:

  • I’m now using Scott Van Vliet’s Search Engine Safe URL utility. It’s a really nice little drop-in that automatically rewrites URLs based on your query string parameters. Now, instead of seeing a URL like Default.aspx?Section=Home, you’ll see Default.aspx/Section/Home.
  • I wrote an OPML server control that translates an OPML file into HTML (there’s a rough sketch of the idea after this list). You can see this in action on the sidebar.
  • Major backend fixes and enhancements to TestostelesWeb, so I can now specify sort order for navigation links, and nest sections. The admin section was totally redone as well, and I’m using XStandard to generate content instead of FreeTextBox. Nothing against FreeTextBox, but XStandard guarantees strict XHTML output, which is crucial. I first found out about XStandard on Milan’s excellent AspNetResources site.
  • The main feed for the site is now generated using NPlanet, a little feed aggregator I put together a while ago that uses RSS.NET to combine a list of RSS feeds. NPlanet has since been rolled into TestostelesWeb.
  • I’m using output caching now. Can’t believe I wasn’t doing this earlier.
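As promised above, here’s a rough sketch of the idea behind the OPML control. This isn’t the actual TestostelesWeb code (the class name, property name, and rendering details are just illustrative), but it shows the general shape: load the OPML file, walk the outline elements, and write out an HTML list.

public class OpmlList : System.Web.UI.Control
{
    private string opmlPath;

    // Virtual path to the OPML file, e.g. "~/links.opml" (hypothetical).
    public string OpmlPath
    {
        get { return opmlPath; }
        set { opmlPath = value; }
    }

    protected override void Render( System.Web.UI.HtmlTextWriter writer )
    {
        System.Xml.XmlDocument doc = new System.Xml.XmlDocument();
        doc.Load( Page.Server.MapPath( opmlPath ) );

        // Assumes each outline element carries text and xmlUrl attributes,
        // which is the common blogroll convention.
        writer.Write( "<ul>" );
        foreach( System.Xml.XmlNode node in doc.SelectNodes( "/opml/body/outline" ) )
        {
            writer.Write( "<li><a href=\"" + node.Attributes["xmlUrl"].Value + "\">" );
            writer.Write( node.Attributes["text"].Value + "</a></li>" );
        }
        writer.Write( "</ul>" );
    }
}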

Additionally, I’ve tried to clean up my markup, based on the things I’ve learned since the last version of the site was completed in June. There’s also a nice stylesheet for printing. Comments are welcome!

Desktop Search, Fixing the Problem From the Ass End

Hype. That’s all “Desktop Search” (the latest cool thing buzzing around the Internet) really is. I’ve installed and used Copernic, MSN Desktop Search, and Google’s Desktop Search products, and they’re really only halfway there, because they don’t solve the real problem: Why do our files get lost in the first place?

I don’t want to need to know where or how my documents and files are saved; I should just be able to tell the system what I’m looking for, and have it appear. I should be able to save a “Search Folder” into My Documents that shows me all the files, emails, contact info, etc., from or about Jim Smith. I shouldn’t have to type “Jim Smith” into a dinky little toolbar icon every time I need to see these files. And how do these search tools know that Jim was the guy that sent me that great nude picture of Pamela Anderson I have as my desktop background? I’m pretty sure they don’t. That’s the kind of stuff I want, not some stupid collection of toolbars showing me search results in a web browser. Yippee.

The sad part is we’re so close. E-mail clients like Evolution have had “Virtual Directories” for years! The search technology is basically there, so why not go the extra step and actually integrate it into the OS (and no, the toolbars don’t count)?

Page Controller Pattern in ASP.NET

The Page Controller pattern is perhaps the simplest way to add MVC to your web apps.

A Page Controller is one object or file declaration designed to handle the request for one logical web page or action.

That’s the simplest way to explain it. So how exactly does it map to ASP.NET? Quite easily, as it turns out:

  • Model: Your business class (e.g., a User class)
  • View: A UserControl
  • Controller: Your code-behind class for the requested page

From Martin Fowler’s Patterns of Enterprise Application Architecture:

The basic responsibilities of a Page Controller are:

  1. Decode the URL and extract any form data to figure out all the data for the action.
  2. Create and invoke any model objects to process the data. All relevant data from the HTML request should be passed to the model so that the model objects don’t need any connection to the HTML request.
  3. Determine which view should display the result page and forward the model information to it.

This sounds simple enough, and in fact it is. So how does it work, exactly? Well, let’s say we want to display the information regarding a specific User within our system. We’ll first need to have a User class, with its requisite properties (e.g., FirstName, LastName, etc.). We’ll also need a UserControl to display the user’s details. This UserControl will have a property called BoundEntity, which holds an instance of the User class it is to display. The job of the Page Controller (our .aspx code-behind file) is to figure out which user needs to be displayed, fetch their details from the data store, pass the User to the UserControl, and then load the UserControl for display. Here’s how it could look:

private void Page_Load(object sender, System.EventArgs e)
{
    LoadEntity();
}

private void LoadEntity()
{
    // Model: load the requested User (the ID comes from the query string,
    // via the RequestedEntityID property on the base page class).
    User user = new User();
    if( this.RequestedEntityID > 0 )
    {
        user.ID = this.RequestedEntityID;
        BrokerFactory.Fill( user );
    }

    // View: instantiate the UserControl, hand it the model, and add it
    // to the placeholder on the page.
    EntityControl cont = (EntityControl)LoadControl( "EditStaff.ascx" );
    cont.BoundEntity = user;
    this.phEntity.Controls.Add( cont );
}

So, as you can see, we’ve grabbed the ID from the query string (through RequestedEntityID, a property of the base page class), and if the ID is greater than zero, we load the User from the database. We then create the desired View by instantiating the correct UserControl, assign the User object to the control, and finally add the UserControl to a PlaceHolder control on the actual .aspx page.
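For completeness, here’s roughly what the View side could look like. The EditStaff.ascx code-behind isn’t shown above, so the layout below (the base class wiring and the lblFullName label) is an assumption; only the EntityControl type, its BoundEntity property, and the User class come from the example.

public class EditStaff : EntityControl
{
    // Declared in the EditStaff.ascx markup (hypothetical label name).
    protected System.Web.UI.WebControls.Label lblFullName;

    protected override void OnPreRender( System.EventArgs e )
    {
        base.OnPreRender( e );

        // The Page Controller has already assigned BoundEntity, so the View
        // only needs to translate the model's properties into markup.
        // (Assumes BoundEntity is exposed as a general entity/object type.)
        User user = (User)this.BoundEntity;
        lblFullName.Text = user.FirstName + " " + user.LastName;
    }
}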

Search/Virtual Folders

I came to a realization the other day, regarding how I use my Windows filesystem for storing documents. Put quite simply, I don’t. The main reason being that it’s just plain awkward and lame to manage. Why? Here are a few simple use cases:

  1. Saving a document received in an e-mail:
    • Click on the attachment, and save it to my desktop
    • Open the document from my desktop. Read it.
    • Decide that it’s worth keeping around. Make a mental association regarding what category it falls under.
    • Browse to My Documents. If a folder mapping to the document’s category exists, drag the document from my desktop to that folder. Otherwise, create a new folder and drag it there.
  2. Searching for a document I think I’ve previously saved:
    • Remember that I’ve previously saved a document related to what I’m interested in.
    • Browse to My Documents, scan the folder for sub-folders that might contain the document.
    • If I find a proper match, enter that folder and look around for a document with a title that might match what I’m looking for.

In each of these use cases, there’s a whole lot that can go wrong. I can wrongly classify a document and place it in the wrong folder. The name of the document may not accurately describe its content, in which case I have to rename it. There are about a million other reasons why I will probably not be able to find this document at a later date, when I really need it.

Now, back to my realization… For months now, I’ve actually been using my e-mail program (Thunderbird) as a file system. If someone wants me to have a document, I just tell them to e-mail it to me, because I can easily use Thunderbird’s built-in search to find it again later. I can search on a whole bunch of contextual data that the document doesn’t actually contain, such as who sent it to me, when it was sent, etc. And I can do this quickly. Searches in Thunderbird take only a few seconds at most, unlike the built-in Windows search, which licks donkey’s balls. If you use Outlook, there’s an add-on called Lookout that can do the same thing (plus a bit more). In my mind, it’s a pretty sad state of affairs.

There are quite a few concepts out there that would help me out if they only worked now, and on the Windows filesystem, including WinFS (not due for a long time, if ever), Virtual/Search folders in Evolution, Outlook, and Thunderbird, and Smart Playlists in iTunes and WMP. I basically want to be able to have virtual folders on my hard-drive that represent queries on the documents it contains. For example, I want a folder containing all my recent work-related documents, automatically updated whenever I save a new document, wherever it may actually live on my disk.
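Just to make the idea concrete, here’s a toy sketch of what I mean by a virtual folder: a saved query that gets re-evaluated against the real filesystem every time you open it. None of this exists in Windows today, and a real implementation would recurse into subdirectories and use an index instead of walking the disk, but it shows the shape of the thing.

public class VirtualFolder
{
    private string root;
    private string pattern;
    private System.TimeSpan maxAge;

    // e.g. new VirtualFolder( @"C:\Documents", "*.doc", TimeSpan.FromDays( 30 ) )
    // would behave like a "recent work documents" folder.
    public VirtualFolder( string root, string pattern, System.TimeSpan maxAge )
    {
        this.root = root;
        this.pattern = pattern;
        this.maxAge = maxAge;
    }

    // Re-run the query every time the "folder" is opened, so the contents
    // stay current no matter when the files were actually saved.
    public string[] GetFiles()
    {
        System.Collections.ArrayList matches = new System.Collections.ArrayList();
        foreach( string file in System.IO.Directory.GetFiles( root, pattern ) )
        {
            if( System.DateTime.Now - System.IO.File.GetLastWriteTime( file ) < maxAge )
                matches.Add( file );
        }
        return (string[])matches.ToArray( typeof( string ) );
    }
}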

This isn’t exactly a new concept or request, since it exists almost everywhere except where it’s most useful. The lads at Novell have been working on an app called Beagle for a while now that does exactly what I’m asking, but on Linux. They’re using DotLucene to index documents, which is an open-source search tool written in C#, based on the Java project called Lucene. I just wish there was something like it to try out on Windows. If there is, let me know about it.

NPlanet Re-Visited

Well, it’s now got a home of its own:

NPlanet homepage

There’s not much there… The site itself is running NPlanet, but there’s no documentation or anything as of yet. You basically just change the Planet.xml file anyway, and the syntax should be pretty self-evident. The default layout is composed of semantic markup, so you should be able to personalize everything from the CSS file.

The only problem I’m having is with caching the page output. Ideally, you don’t want to be hitting each and every feed with every page load. I’ve tried to set up ASP.NET output caching to cache the feed; however, it doesn’t seem to be working. Help in that regard would be awesome. Anyways, feedback is welcome.
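For reference, the stock ASP.NET way to do this is either the OutputCache page directive or its programmatic equivalent, sketched below. This is just the generic API, not NPlanet’s actual code, and BindFeeds is a made-up stand-in for whatever does the aggregation.

private void Page_Load( object sender, System.EventArgs e )
{
    // Cache the rendered output on the server for 30 minutes, so the
    // upstream feeds only get hit when the cache expires.
    Response.Cache.SetCacheability( System.Web.HttpCacheability.Server );
    Response.Cache.SetExpires( System.DateTime.Now.AddMinutes( 30 ) );
    Response.Cache.SetValidUntilExpires( true );

    BindFeeds();   // hypothetical method that fetches and merges the feeds
}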

NPlanet, Your RSS Feed Aggregator

I spent a good chunk of time last night attempting to create a .NET version of the wonderful Planet feed aggregator. For those of you who don’t know what Planet is, it’s basically a Python-based app that aggregates the feeds of many into one RSS feed. I think the goal behind it is to create communities or ‘Planets’ of blogs written by like-minded individuals. It differs in concept from applications like .Text (upon which this blog is based) in that it aggregates any feed from any location, instead of just containing a bunch of blogs all within one application and serving them from the same location. Think of it as a mini-Bloglines.

Anyways, the code seems to work fine, although it doesn’t do Atom just yet (I’m using RSS.NET to grab all the feeds, and it doesn’t support Atom). I’ve now just got to set up a web project with all the simple default layout stuff taken care of, so people can download the project and just start aggregating. The best part is there’s no database.
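Conceptually, the aggregation boils down to the sketch below: pull the items out of every feed, pour them into one list, and sort them newest-first. FeedItem and the Merge helper are made-up stand-ins here, not the actual RSS.NET types; the fetching itself is RSS.NET’s job and isn’t shown.

public class FeedItem : System.IComparable
{
    public string Title;
    public string Link;
    public System.DateTime PubDate;

    public int CompareTo( object other )
    {
        // Reverse chronological order, so the newest post tops the planet.
        return ((FeedItem)other).PubDate.CompareTo( this.PubDate );
    }
}

public class Aggregator
{
    // Each inner ArrayList holds the FeedItems pulled from one feed.
    public static System.Collections.ArrayList Merge( System.Collections.ArrayList[] feeds )
    {
        System.Collections.ArrayList all = new System.Collections.ArrayList();
        foreach( System.Collections.ArrayList feed in feeds )
            all.AddRange( feed );
        all.Sort();
        return all;
    }
}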

The feed parser has been merged into my TestostelesWeb package, and the web project will use TestostelesWeb’s nifty RssControl to display the aggregated feeds. It’s all starting to come together nicely. Give me a couple of days to get a package out.

The Way ASP.NET Validation Should Work

One of the really cool features of ASP.NET is the built-in validation controls. For the most part, they work well, even though they have some issues. One of the things that bugs me about them is that they don’t scale particularly well. It becomes a pain to manage them when you have to go into each and every page to change their behaviour or look ’n’ feel. Enter the following article, just posted on MSDN:

Dynamic Creation of Validation Controls

In my mind, this is the way the validators should have worked from the very beginning.
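The basic idea, stripped of the article’s plumbing (this is just the plain ASP.NET API, with a hypothetical txtEmail textbox and phValidators placeholder), is that a validator is just another control you can create and wire up at runtime:

private void Page_Load( object sender, System.EventArgs e )
{
    // Build the validator in code instead of declaring it on every page.
    System.Web.UI.WebControls.RequiredFieldValidator rfv =
        new System.Web.UI.WebControls.RequiredFieldValidator();
    rfv.ControlToValidate = "txtEmail";
    rfv.ErrorMessage = "E-mail address is required.";
    rfv.Display = System.Web.UI.WebControls.ValidatorDisplay.Dynamic;

    // phValidators is a PlaceHolder control on the page (hypothetical).
    phValidators.Controls.Add( rfv );
}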

Ok, So This Is Weird

Picture me in the basement of my house… I’ve just finished putting my dirty unmentionables into the washing machine. The lone fluorescent light flickers. I trip on something rough… it rattles. Looking down, I see a medium-sized burlap sack blocking half of the hallway. Hmmm… I wonder what’s inside? With a strange anticipation, mixed with more than just a little trepidation, I open the burlap sack.

Walnuts?!?

You’ve got to be friggin’ kidding me.