Thursday, January 26, 2012

Invalid Viewstate w/ ASP.Net Cross-page Posting

Recently, I needed to send data from one page to another as part of preview functionality for a web-app. Various factors made it best to send this data as a POST rather than attaching it as a querystring to send via GET. In every web development environment I worked in prior to ASP.Net, this is as simple as changing the action associated with the form that sends the data (so that it points to the other page instead of posting to itself). The receiving page processes the POST data from an alternate page the same way it would process the POST data from itself (assuming, of course, that you aren't using a form filled with ASP controls, in which case it does get more complicated...*).

When I tried to implement this same model in ASP.Net, it failed, producing a server run-time error stating that the Viewstate of the receiving page was invalid (and explicitly not referencing any of my code). Lovely. After digging in a bit further, I found an easy fix. It turns out that the action on the form needs to remain the source page itself. There is a special attribute you can include on an ASP.Net button control which tells the button to act as a submit button targeting a different page than the normal form action. So the fix was as simple as replacing my HTML submit button with this ASP.Net control:

<asp:Button runat="server" PostBackUrl="page.aspx" ID="btnID" Text="Submit"/>

That's a simple solution, and with it in place, everything else seems to work as expected. Regardless, I find it annoying that breaking a basic functionality of HTML is somehow deemed acceptable in ASP.Net, so long as they provide this work-around.

* Dealing with ASP.Net controls is only slightly more complicated. The receiving page doesn't have these controls in its Viewstate as it would with a typical ASP.Net postback. Instead, the source page's controls are loaded into a PreviousPage object, which you then reference via PreviousPage.FindControl("id").
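For example, pulling a value out of a text box on the source page might look like this in the receiving page's code-behind (the control and variable names here are illustrative, not from the actual project):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // PreviousPage is only populated on a cross-page post
    if (PreviousPage != null)
    {
        // FindControl returns a generic Control, so cast to the expected type
        TextBox txtName = (TextBox)PreviousPage.FindControl("txtName");
        if (txtName != null)
        {
            string previewName = txtName.Text;
        }
    }
}
```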

Friday, January 20, 2012

Violating Language Specifications

In my previous post about using jQuery to provide default values to text fields, the comments here and on social media got a bit interesting. First, a suggestion for improvement came from Erik (which I edited into the post itself). That suggestion revolved around using a CSS class to track which fields should have a default value, and a custom html attribute to maintain the value each field should have. This technique dramatically simplifies the implementation. Sounds good, right?

Well, there's one problem. Maybe. Adding a custom html attribute results in markup that violates HTML specs. By employing this technique to make current and future development cleaner and more pleasant, you are technically creating invalid HTML pages. Sounds bad, right?

I'm not so sure. The risk you run by violating a language's specs is pretty straight-forward: there is a significantly increased chance that a browser somewhere will render your page wrong and/or your code will utterly fail. Further, you may complicate the maintainability of the codebase by confusing future developers with your non-standard practice. Some violations are more likely to be problematic than others.

When I was creating my StackExchange clone, I made the mistake of unintentionally including a <div> inside a <span> element. Most browsers rendered this fine. No other developer would be confused by it. But about 1% of the time, Firefox 3.5 (the most common version hitting the site at the time) would drop the text-styling from the offending element. So on a page displaying 15 questions, there was a 10-15% chance that at least one of the questions would be almost inexplicably styled wrong. Once the cause of the problem was found, it was an easy fix, but it was certainly a pain to find. In fact, the only way I found it at all was by running the pages through an HTML validator and fixing the resulting violations.

On the other hand, there are definitely cases where deviating from the official line is worthwhile. The traditional rule was to place javascript includes at the top of the page, in the HEAD. Yet any good developer will put as many of these includes as possible at the end of the file, so that the html is rendered before the javascript is retrieved (as this often makes pages feel significantly faster to the user). This deviation is so common it has become a virtual standard; many tools for checking page load times specifically search for it. Certainly, then, this is one departure we all agree should be employed.

In the end, I think it comes down to balancing the dangers of violating a language's specification against the benefits of the violation. If the benefits outweigh the risks, I submit that violating specs is a proper coding practice. What do you think?

Wednesday, January 18, 2012

Quick & Easy jQuery Default Values

I have some more meaty stuff to post in the near future, but those posts will take some time to develop. Instead of going down that path today, I decided to go for the low-hanging fruit of sharing the quick & easy function I threw together to give default values to a client's web app. No long theory discussion today, but if you find this code useful, great. Basically, the function takes as input the ID of the field to which you want to assign a default label, and the text of that label. Then it immediately gives the field that value in a light grey color if there is no existing value. When the user clicks into the field, it blanks out to let the user supply their actual input. When the user leaves the field, it checks the value to see if the field is blank, reassigning it as the label in light grey if so.
function giveGreyLabel(id, label) {
    var field = $("#" + id);

    // Show the label in grey whenever the field starts out empty
    if (field.val() == "") {
        field.val(label);
        field.css("color", "grey");
    }

    // Clear the label when the user clicks into the field
    field.click(function () {
        if (field.val() == label) {
            field.val("");
            field.css("color", "black");
        }
    });

    // Restore the grey label if the user leaves the field blank
    field.blur(function () {
        if (field.val() == "") {
            field.val(label);
            field.css("color", "grey");
        }
    });
}
A refinement to the above code came out in the comments. Basically, by employing CSS classes for the styles, you can use selectors to do the adjustments. That would look something like:
$('.defaultText').blur(function(){
   var me = $(this);
   if(me.val() === ''){
      me.val(me.attr('default'));
      me.addClass('defaultColor');
   }
});

$('.defaultText').focus(function(){
   var me = $(this);
   if(me.val() === me.attr('default')){
      me.val('');
      me.removeClass('defaultColor');
   }
});
And the HTML to use it would be:
<input type="text" class="defaultText" default="First Name" id="firstname"/>
This code handles the changes for when the fields gain or lose focus, but it does nothing for the initial view. To handle that, you need to either trigger the blur handler on the defaultText class or employ a third function:
$('.defaultText').each(function(){
   var me = $(this);
   if(me.val() === ''){
      me.val(me.attr('default'));
      me.addClass('defaultColor');
   }
});

Thursday, January 5, 2012

jQuery Autocomplete Via JSON in ASP

On a recent project, we needed to add autocomplete functionality to a few text fields. As I've noted before, in past lives, I would do this through jQuery-UI's autocomplete, with a stand-alone page that does nothing but echo the desired JSON string. Since I'm happy with jQuery's implementation, I decided that the best bet was to go ahead and replicate this functionality in ASP.Net. Turns out, it's pretty darn easy.

First, the change to the client-side/display code: In the code-front (with the HTML), nothing changes. In the Javascript onload function, one line needs to be added:
$("#txtRequestType").autocomplete({ 
   source: "json.aspx", delay: 100 
});
I always set the delay to 100ms because the default of 300ms feels aggravatingly slow to me.

Next, we construct the server-side JSON generation code. This is where I expected to encounter some hang-ups, as I wasn't even sure what type of ASP.Net project item to use to create a page with only text/JSON content. I played with a few options before coming upon a Stack Overflow question that made it trivial. The key is to build the JSON string, then use the Response object to write it and close the response (writing nothing else):
protected void Page_Load(object sender, EventArgs e)
{
  string json = getJsonString();
  Response.Clear();
  Response.ContentType = "application/json; charset=utf-8";
  Response.Write(json);
  Response.End();
}
You could just build your JSON string in that function, but splitting it out to a separate function seems like the correct decision. Plus, it allowed maximum extensibility by using an optional source parameter to specify what should be used to build the JSON string, making this page a multi-purpose one:
protected string getJsonString()
{
   LandLordDataContext db = new LandLordDataContext();
   var Values = (from t in db.Table1
   orderby t.Field1
   select t.Field1).Distinct();

   if (Request["source"] == "Table1Field1")
   {
     Values = (from t in db.Table1
               orderby t.Field1
               select t.Field1).Distinct();
   }
   else if (Request["source"] == "Table2Field3")
   {
      Values = (from t in db.Table2
                orderby t.Field3
                select t.Field3).Distinct();
   }

   string strJson = "";
   string currValue = "";
   foreach (var Value in Values)
   {
      currValue = Value.ToString();
      if ((Request["term"] == null) || (currValue.ToLower().Contains(Request["term"].ToLower())))
      { strJson = strJson + ", \"" + currValue + "\""; }
   }
   if (strJson.Length > 2) strJson = strJson.Substring(2);

   string strResult = "[" + strJson + "]";
   return strResult;
}
The final thing worth noting here is that you have to do the filtering on the server-side. That might be counter-intuitive if you've otherwise used a javascript array to interface with jQuery's autocomplete (where it does the filtering automatically). That's where the processing around Request["term"] comes into play (inside the foreach of the final function).
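One caveat with concatenating the JSON string by hand as above: a value containing a double quote or backslash will produce invalid output. The filter-and-serialize step is easier to get right with a real JSON encoder; here is the same logic sketched in JavaScript (the function and parameter names are my own, for illustration):

```javascript
// Filter values by the autocomplete term (case-insensitive),
// then let JSON.stringify handle quoting and escaping.
function buildAutocompleteJson(values, term) {
    var matches = values.filter(function (v) {
        return !term || v.toLowerCase().indexOf(term.toLowerCase()) !== -1;
    });
    return JSON.stringify(matches);
}
```

On the .Net side, the JavaScriptSerializer class (System.Web.Script.Serialization) fills the same role as JSON.stringify does here.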

Overall, this took only slightly longer to sort out and set up than an equivalent setup would have taken me on the LAMP stack. But going forward, it will be trivial to repeat in minutes.

Wednesday, January 4, 2012

Liking Linq-to-SQL

Prior to joining Rubicite, I hand-crafted every SQL query that I used in web development. In fact, I often tested my queries for accuracy of results and performance by replacing any variables with set values and then running them through a command-line interface to the database. I took steps to prevent problems like SQL injection, but didn't take that as far as I should have; in relatively few cases did I use parameterized queries, preferring my own functions for white-list input validation and filtering of variables before using them in an SQL query. Part of this was based on issues I had seen in past code from coworkers that was overly reliant on database frameworks, resulting in queries that retrieved millions of records to display a couple dozen entries.

As I moved into the ASP.Net world, there was a temptation to stick with what I had always done. Support for parameterized queries is more integrated into the base language structure, and initially I assumed that would be enough to satisfy me. Others at Rubicite disagreed and pushed me to learn LINQ to SQL. Now, I am glad they did.

I am not going to go over the details of the Linq-to-SQL methodology and implementation. I'll touch on a few highlights while discussing the things I like about it, but for full details check out Scott Gu's 9-part tutorial.

Basically, the components of a Linq-to-SQL implementation include a database model that is loaded into your .Net project, coupled with LINQ queries in your code. LINQ queries look rather different from a typical SQL query, but once you wrap your mind around the structure, they are just as easy to craft. Here's an example:
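Something along these lines, building the query as a raw string (the table, field, and variable names are placeholders of my own, and as noted below, injection protections are left out):

```csharp
// Illustrative only -- no injection protection, as discussed below
string sql = "SELECT * FROM Table1 WHERE ValueField = '"
             + targetValue + "'";
```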

Rewriting that query in Linq-to-SQL looks like:
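In rough form (the datacontext, table, and field names are placeholders of my own):

```csharp
// The datacontext comes from the database model loaded into the project
PlaceholderDataContext db = new PlaceholderDataContext();

var Results = from t in db.Table1
              where t.ValueField == targetValue
              select t;
```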


As you can see, the two are pretty similar. I included more than just the LINQ query there to illustrate that you must define and reference a datacontext on which to base the query. That comes from the database model you loaded into the project. I left out any protections from the SQL version, but clearly if you were using this query in real life, you would want to employ some of the techniques I mentioned in my first paragraph.

So, why would I choose the LINQ version over the SQL version? On the surface, the two look pretty similar. All things being equal, most would probably stick with the SQL version as it is more likely to be familiar. I certainly would. Except that all things aren't equal. You see, in the SQL version, there's the matter of implementing the security techniques noted above. That means wrapping the query with some kind of data structure or package to do the execution, and then adding logic and/or parameterization code to ensure we aren't opening ourselves up for a world of hurt. In the LINQ query, there is no need for a separate data structure. Instead, LINQ integrates into the base language to enable strong typing directly in the query itself. That means that if the type of `t.ValueField` does not match the type of `targetValue`, the code will fail to compile. It knows these types because it is running directly in .Net and has loaded the types from the database into the datacontext. This is a big win.

Another big win is the simplicity of dealing with the data afterwards. Rather than an obscure resultset with a generic set of columns that may or may not actually exist, the result of a LINQ query is an object (or, rather, a collection of objects). Each column of the results is automatically built into the resulting object as a property. So in later code, I could access the ValueField of the first result object through `Results.First().ValueField`. Types are preserved in these objects, improving the interaction in the related code quite a bit over what you would have with a generic resultset. Certainly, you could build your own object to populate from a generic resultset of an SQL query, and that would provide many of the same benefits, but there's no comparison to the speed and ease of use the LINQ method gives you. And should you prefer a custom object for the LINQ query (to provide custom methods, perhaps), you can easily do that as well.

The two drawbacks I have found to LINQ are the handling of complex queries and the requirement to keep the database model up-to-date. Really complex queries can turn into a mess and might even give some performance issues. The solution I use is to move queries of this complexity into stored procedures, which is probably the right thing to do in a traditional SQL approach as well. Keeping the database model up-to-date is an extra step required any time you create a new table or alter the structure of an existing one, but it is as simple as a few mouse-clicks to delete the old model of the table and add the new one(s). Definitely worth it for all the other benefits.