Tuesday, February 24, 2009

A Rant on the State of Web Development (Part 1)

You know... the web world is beginning to surprise me- and not along the lines of how far it has come, but rather along the lines of how far it hasn't. As you may have gathered from my earlier blogs, I'm coming into this web development world fresh with only a minor taste of prior experience ending around 2000... and honestly, where are we now? It seems like we've layered hack (css) upon hack (javascript) upon hack (flash) over an existing system (html) which was clearly never meant to do what it's being forced to do today.

I mean, take any 'modern' website and look at the hodgepodge of overlapping technologies thrown together in an impressively complex manner in order to offer the minimal level of usability and convenience modern websites offer.

Feast your eyes on my old-school ascii art diagram: (which likely took less time than drawing a diagram up in visio, saving it as a gif, uploading it to this blog, and modifying the blog html not to resize it, only to realize I need the diagram at an appropriate level of zoom in order to naturally fit in the blog width, thus necessitating a few more roundtrips just to get it right... you get the point.)


**Data Sources**      **Server Side**             **Client Side**

                                    ---------     --------
-----------------                   |  CSS  |     | Ajax |
|  SQL, Oracle  |                   ----\----     ---/----
-----------------\    -----------   ----\-----   ---/----------
                  }---| Asp/Php |---| "HTML" |---| JavaScript |
-----------------/    -----------   ----------   ----\---------
|  Webservices  |                                -----\-----------
-----------------                                | Flash/S.Light |
                                                 -----------------


You've likely got a massive amount of Asp.Net or Php on the server side just to provide the architecture, those technologies are backed by databases and datasources on the back-end, and they barf out "html" on the front end. I put html in quotes because the html is really only there as a legacy and a formality in order to provide a loose structure around the data. Numerous html elements are tagged with class and name attributes for the purposes of supporting css (to be discussed next) and tagged with 'ids' and reference points for the purposes of working with javascript.

Once you've got said data, you then use CSS to "style" the html. This 'styling' includes coloring the html elements, rearranging them (often intentionally ignoring the structure provided by the underlying html), and even hiding some html for the purposes of later unhiding it via javascript.

Javascript is the ultimate hack's delight. With javascript you can completely change any portion of the html/css beast you've generated. Need a last minute fix? Want to move some elements around or hide/unhide them? Want to open up a new window and hide it under the existing window to display an obnoxious ad? Javascript is the answer.

Theoretically, you don't even really need the html or the css in the first place- provided you got the data to the client's browser somehow, you could just build it all on the fly... which brings me to Ajax.

Ajax is actually pretty darn cool- it's the best misuse of a quirky javascript feature to happen to the web world, hands down, and it's the closest thing that exists to where the web is headed... the only problem with Ajax is that it still needs to spit out html, css, and/or even more javascript in order to get its job accomplished. It's difficult to accomplish anything truly revolutionary in Ajax without taking on enormous complexity.

And then you have Flash, Silverlight, ActiveX controls (still not dead), and the other stabs at embedding little pieces of flair into the aforementioned technological mess which is the meat of the web. Proprietary quasi-standalone languages developed to be built into a web architecture which didn't really want to have them in the first place.

Looking at so many overlapping layers of legacy adding so much complexity, the obvious answer is to throw it all away in exchange for a solution which is designed to provide rich content from the get-go... and in my next blog I'll take a stab at what I think a better approach would be.

Wednesday, February 11, 2009

My First Dip Into Windows Powershell

One could take an extreme stance and say:

"Automating Repetitive Tasks is the Cornerstone of Productivity"

If not the cornerstone of productivity, it's at least a step away from a life bogged down with the boredom of doing repetitive tasks. In software, there is little productive work accomplished without a surrounding mountain of boilerplate repetitive tasks... and what do we have to fight back against those tasks? Well... scripting, naturally. And the first place a person looks to find a scripting language is at the built-in command interpreters of their OS.

The Unix derivatives have a multitude of command interpreters (bash, csh, ...) and hence have a multitude of competing scripting languages... which surprisingly didn't benefit much from the competition, as they all have subtly different and uncomfortably cryptic syntax and feature sets. Windows, on the other hand, had no internal competition, and thus the only command interpreter Windows offered was 'cmd' with its scripting language, batch.

Ohhh batch. Anyone who has had to write anything more than the most trivial of DOS batch scripts likely hopes never to repeat the experience. DOS batch scripting is so inflexible and outdated that it makes the awkward syntax used by bash and csh seem golden in comparison. Most any useful batch script required very clever hacks and an intimate understanding of the ins and outs of... well, actually, rather than wasting my time complaining about how terrible batch is, let's get to the point of this blog: Windows Powershell.

Windows PowerShell appears to have been in the dirty hands of the public since sometime in 2006, and perhaps is Microsoft's long-awaited replacement for cmd. It's built on top of the .NET framework and, well, the best way I can describe it would be if the .NET framework and Bash got together on a wild night and PowerShell was embarrassingly delivered by the stork 9 months later. Unfortunately (as is all too common) it appears the stork neglected to bring decent documentation along with its delivery... google, here we come.

So I downloaded PowerShell 1.0 along with the 'PowerShell documentation pack' from Microsoft, installed it, read through the 'documentation pack' (which basically just consisted of two 'getting started' documents- nothing comprehensive), played around with a few of the commands, lost interest, and closed it without opening it again until yesterday (two weeks later). My initial impression was that, despite being quite familiar with batch, bash, and the .NET framework, PowerShell wasn't as intuitive as I hoped, but did seem interesting enough to merit further investigation.

Well, yesterday I updated some css files and wanted to 'deploy' them to the location on my HD where I host my development copy of the company website. By 'deploy' I of course mean the repetitive task of copying the files by hand to their appropriate locations... a task I expected to do numerous times as I tweaked the css pixel by pixel to get everything aligned 'just right.' It didn't take more than two iterations before I got fed up with the repetitive task and decided to write a script to do it for me. Typically I'd use python for a task such as this... but it seemed too 'easy' for python... all I wanted to do was (1) search a drive for instances of a particular file and then (2) overwrite those file locations with the new version... something so simple seemed like a good test for PowerShell.

The first thing to know about PowerShell is that the commands work with .NET objects, not strings. What does that mean? Well, to me it means that you can take the return of a function, put a '.' after it, and call GetType().FullName to see the class name... and it took a little playing around but I did manage to accomplish that:

PS C:\> (dir C:\AUTOEXEC.BAT).GetType().FullName
System.IO.FileInfo

Sure enough, the dir command returns FileInfo objects. But what if it returns more than one item?

PS C:\> (dir C:\*.*).GetType().FullName
System.Object[]

Interesting... it returns an array of objects... and how do I get at the elements? Trying a standard array operator with a zero index:

PS C:\> (dir C:\*.*)[0].GetType().FullName
System.IO.FileInfo
PS C:\> (dir C:\*.*)[0].Name
CONFIG.SYS

I'm convinced. (Note that PowerShell operates on 'cmdlets' and that 'dir' actually maps to Get-ChildItem. Try typing "dir alias:" to look at the 'alias drive' and see all of the built-in aliases.)
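As a quick check of that aliasing, you can ask the alias drive what 'dir' resolves to (a short transcript; the Definition property on an alias gives its underlying cmdlet):

PS C:\> (dir alias:dir).Definition
Get-ChildItem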

Now I dove into writing a function to do my file copy. The documentation pack's quickstarts gave a few trivial examples for functions, but I needed something more... I couldn't find any good Microsoft documentation (their faq pointed to the powershell blog, and the search function on the powershell blog didn't appear to work (searching for 'function' returned no results)) so I hit the ol' google and landed on "PowerShell Functions and Filters" at powershellpro.com. (Btw, check out the pictures on the front page of powershellpro.com... quite possibly the cheesiest stock corporate pics ever.)

Ok... with knowledge on how to pass parameters to a function (how could M$'s documentation pack not cover passing parameters to a function?) I could at least write the first line of my function... but then came the question of input validation... how do I validate that my required parameters are passed and have valid content? In fact, the primary parameter is the location of the source file which I'm going to use to overwrite the destination files... how can I call System.IO.File's Exists method to check to see if it exists?

Well, rather than ask the thousand questions I asked while writing my first method, let's dump the contents of my method and discuss the more interesting points...


function deployfile (
    [string]$sourcefile=$(throw "Must pass sourcefile param"),
    [string]$filter,
    [switch]$noprompt,
    [switch]$whatif)
{
    # Check to see if the source file exists
    if ( ![System.IO.File]::Exists($sourcefile) )
    {
        write-host "The source file $sourcefile doesn't exist";
        return;
    }

    # If they didn't pass in a 'search filter' then let's just use
    # the input filename as the file to search for
    if ( $filter -eq "" )
    {
        $filter = [System.IO.Path]::GetFileName($sourcefile);
    }

    # Search the filesystem for instances of the file. Wrapping the call
    # in @() guarantees an array even when zero or one file matches,
    # so .Count is always valid.
    $flist = @(Get-ChildItem -recurse -filter $filter)
    if ( $flist.Count -eq 0 )
    {
        write-host "There were no files found";
        return;
    }

    # Iterate over every instance found
    foreach ($a in $flist)
    {
        # If the user didn't ask us not to prompt them, then ask whether the
        # file should be overwritten; if not, continue on to the next file
        if ( !$noprompt )
        {
            $ans = read-host -prompt "Replace File $($a.FullName)? (Y/N)";
            if ( $ans -eq "n" )   # -eq on strings is case-insensitive
            {
                continue;
            }
        }

        # If we're not in pretend 'what-if' mode then actually do the replace,
        # otherwise let the user know what we would have done.
        if ( !$whatif )
        {
            write-host "Replacing $($a.FullName)";
            copy-item -path $sourcefile -destination $a.FullName
        }
        else
        {
            write-host "Would replace $($a.FullName)";
        }
    }
}
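For reference, here's roughly what a deploy session with the function looks like (the paths here are made-up examples, not my actual layout- the output depends entirely on what the recursive search turns up):

PS C:\dev\website> deployfile C:\tmp\main.css -whatif
Would replace C:\dev\website\styles\main.css
PS C:\dev\website> deployfile C:\tmp\main.css -noprompt
Replacing C:\dev\website\styles\main.css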

Glossing over the takeaways from the above method...

Input Validation
If you qualify the type of a parameter by putting the type in brackets, PowerShell will ensure you're passed that type. If a parameter is mandatory, you add a 'throw' as its 'default value':

[string]$sourcefile=$(throw "Must pass sourcefile param")

And you can further validate your parameters by calling Powershell or .Net functions and then returning if the content is invalid. Note the syntax for calling a static .Net method:

[System.IO.File]::Exists(...)

You put the fully qualified class name in square brackets to get at the type, then use :: to get at its static methods. If you want to create an instance of an object (a StringBuilder, for instance) you can use the 'new-object' cmdlet.
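For example, here are the two call styles side by side (a quick sketch- the static call uses the same System.IO.Path class as the function above, and [void] just suppresses the value Append returns):

PS C:\> [System.IO.Path]::GetFileName("C:\styles\main.css")
main.css
PS C:\> $sb = new-object System.Text.StringBuilder
PS C:\> [void]$sb.Append("Hello, ")
PS C:\> [void]$sb.Append("PowerShell")
PS C:\> $sb.ToString()
Hello, PowerShell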

The 'if' and 'foreach' syntax I got from the quick reference included with the documentation pack. Everything else about the function seems fairly straightforward. In order to have my function always available I apparently have to stick the code for it in a startup script which gets executed every time powershell starts. To do that I had to create the file:

My Documents\WindowsPowerShell\profile.ps1

And put my function definition in there... how's that for something which would have been nice in the documentation pack?
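One small consolation I did eventually stumble across: PowerShell keeps the path to a per-user startup script in the built-in $profile variable, so you can create and open the thing without hunting for the folder by hand ($profile actually points at a host-specific script that lives alongside profile.ps1- both get run at startup):

PS C:\> if ( !(Test-Path $profile) ) { New-Item -path $profile -type file -force }
PS C:\> notepad $profile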

All in all I can see PowerShell becoming a command interpreter and scripting language I'll use more often... though it sure will be nice when it's documented well and in one place ;)

Wednesday, February 4, 2009

Great 'Cheat Sheets'

Speaking of xhtml and css, I just discovered the fabulous (and free as in beer) cheat sheets posted at addedbytes. I've already used the CSS sheet roughly 4 times this afternoon in discussions with a coworker concerning the inner workings of CSS. The CSS sheet is well laid out- with a picture illustrating the CSS box model and basically every reasonably common command. It doesn't cover text-decoration:blink... but that's also information nobody really should know anyhow...

The site also has a collection of other full-color & useful cheat sheets including a Python cheat sheet (added just Jan 22nd), a JavaScript cheat sheet, as well as a SQL cheat sheet. If you, like me, often find yourself googling to find the syntax or spelling of common elements and commands I'd recommend swinging by and printing out some cheat sheets to add color to the generally drab decor of your workspace...

De-rustifying (x)html & css

Well, honestly, to say that my (x)html & css is rusty is to do an injustice to all things rusty... the last time I really played with html was before the .com boom, and the last time I played with css was in 2000, when the css :hover pseudo-class wasn't supported by IE... Oh, wait, what's that? Even today with IE7, :hover is still cranky?? (see Getting :hover to Work in IE7) ... Phew, and I thought everything I knew had changed!

So I talked to a couple of coworkers asking for recommendations on how to quickly come up to speed on what's happened since I last visited the land of web development. My coworkers confirmed that any Xhtml & Css classes I tried to take would likely put me to sleep, and also sadly conveyed the knowledge that any books I picked up might be useful as a reference but probably wouldn't be relevant enough to give me the whole picture. The fact of the matter is that most classes and subject books are given for the absolute beginner and assume no prior knowledge of the subject... which makes complete sense given that other than absolute beginners there's no way of knowing exactly how much the target market for your book already knows... which makes it difficult to write the book in the first place.

That rambling paragraph brings me to where I stand now: classes are out and books are out... which only leaves ol' fashioned diving in and figuring it out using whatever resources I can find on the web. (Isn't that always the answer?) So I decided not to worry about what I didn't know until it got in my way, and instead dove head first into my new work project: adding a new asp.net aspx page, reusing some existing controls, and creating a new control. Simple enough, some copy here, some paste there... then BLAMO- about 2 hours in I found what I needed: a problem involving css which I had no clue how to solve. Necessity being the mother of invention... ok, well, not invention, but at least a google search to find who already invented my solution, and I ended up at a nifty little resource called http://www.htmldog.com/guides/cssintermediate/. There it was... a fast-loading, well-designed website with just enough information on each page to keep me clicking without going so far into mundane detail I'd lose interest... well, to be honest I guess I did lose interest for at least long enough to pump out this blog entry... but suffice it to say that I'll be back there soon...

And whadya know... they sell a book ;)

Tuesday, February 3, 2009

Welcome to Tech Shavings

This blog is to capture my hitherto silent experiences as I make my way through the ever-growing and ensnarling technology jungle. Technology complicates our lives just as often as it simplifies them- and likely at a ratio where every minute saved results in ten more minutes spent on something you previously didn't anticipate.

This blog will capture my impressions as I come across technology gems, capture my frustration and hopefully solutions attempting to use those gems, and hopefully be of use and interest to any who follow in my fingersteps. In technology- where there's one, there's many...

This is the blog of a software guy.