Thursday, July 12, 2012

Stirrings of an RTS Game

The main goal at Rubicite is for us to keep moving in a financially sound direction as a software development company. For the foreseeable future, that means focusing a large majority of our efforts on development for paying clients. However, at our core, we want to be a game development company. To that end, one of our partners spends the majority of their time pushing forward on our first game - a third-person shooter in which an advanced race of snails battles against several enemies. You'll definitely hear more on that as it moves forward - eventually we will be posting it on Kickstarter to get sufficient funding to commit more resources. But that's not what I'm here to talk about today...

The snail game is our first game. With any luck, it will see a 2013 release. As exciting as it is, I'm much more excited for our second game. By all counts, it will probably be a 2015 release, but plenty of things could happen to move that forward or backward between now and then. Currently, it's in the early concept stages, and it's becoming my baby. Here's a bit about what we know:

The gameplay style will be modeled after Myth and Myth II (no, not Myst): a couple of games developed by Bungie in the late '90s (prior to that whole Halo thing). That means it will be a real-time strategy game with virtually no resource management, where the gameplay is driven by combat tactics with a relatively small number of units and a realistic physics engine. This means that the terrain you choose for your battles will matter. Having a height advantage will impact the performance of your ranged units. Formations and micro-management will turn the tide of battle between two otherwise equal forces.

Currently, we are quite a ways from implementing any of that. Like I said, early concept stages. What I'm slowly developing now is the backstory for the game, which is both driven by the unit classes I envision and drives me to create new classes to fit the story. Both, in turn, will drive the level design that will be central to many elements of the gameplay.

The story is centered around a post-apocalyptic Earth. I haven't decided for sure what caused the apocalypse, but I'm leaning heavily toward a series of comet/asteroid collisions. In the time between the apocalypse event and when the game starts, months (or years) have passed. A shortage of resources has led to civilization breaking down, with many small groups forming and warring with each other. Many have died since, but some groups are pushing for a brighter future. Much of the world's ammo and weaponry were rendered useless in the destruction from the apocalypse event, and the majority of what was left has been exhausted by the wars. That means that where war is concerned, humanity has largely returned to modern versions of medieval weapons. Guns and other more modern weaponry still exist, but ammo is in such short supply that they are rare.

Oh, and there are some non-human creatures (not in the way you're thinking) with unique anatomy that will have a meaningful impact on both story and gameplay.

The major story element I'm currently struggling with is explaining how these creatures are able to communicate with humans.

Once it's all said and done, I expect to end up with a novel that mirrors the story of the game. I look forward to releasing it, almost as much as I look forward to the game itself.

Thursday, June 7, 2012

Paypal IPN and PDT in ASP.net

To support a customer who needs to process payments through Paypal, I created a small code package that encapsulates the functionality necessary for the operations used in Paypal's Payment Data Transfer (PDT) and Instant Payment Notification (IPN). This way, the code is reusable for future projects within Rubicite. I'm sharing it on Github, so that it can save some time for the rest of the world: https://github.com/jgb146/Paypal-util

Most of the functionality involved is derived from various tutorials in Paypal's documentation, but this way everything is encapsulated into a single pdt_handler object or a single ipn_handler object (depending on which technology you're using). The resulting object automatically takes the steps required to confirm the contents with Paypal, and then provides access to your payment information through a Dictionary.

To use either component, simply instantiate the object and provide the required information. For PDT, that means including the transaction ID that Paypal sent, your PDT ID token, and an optional boolean to indicate whether or not you are using Paypal's sandbox. For IPN, the only parameter to include is an optional boolean to indicate whether or not you are using Paypal's sandbox. Use looks roughly like this:

IPN:
protected void Page_Load(object sender, EventArgs e)
{
    processIpnPost();
            
    Response.Clear();
    Response.StatusCode = 200;
    Response.Write("");
    Context.ApplicationInstance.CompleteRequest();
}

public void processIpnPost()
{//process the IPN post
    
    //use ipn_handler util to get response and load into a dictionary
    PayPalUtil.ipn_handler ipn_handler = new PayPalUtil.ipn_handler();
    var dic_ipnResponse = ipn_handler.dic_data;
    
    //deal with invalid responses
    if (dic_ipnResponse["response"] != "VERIFIED")
        return;

    //---Insert code to use dic_ipnResponse to process the transaction
}

PDT:
protected void Page_Load(object sender, EventArgs e)
{
    ProcessTransaction();
}

protected void ProcessTransaction()
{
    //deal with unspecified transactions
    if ((Request["tx"] == null) || (Request["tx"] == ""))
    { 
        InvalidTransaction();
        return; //stop processing; 
    }
                
    //use the transaction ID in the GET to request & process PDT data
    //(PDT_ID_Token holds the PDT ID token from your Paypal account)
    var my_pdt_handler = new PayPalUtil.pdt_handler(Request["tx"], PDT_ID_Token);

    //deal with invalid responses
    if (my_pdt_handler.dic_data["response"] != "VERIFIED")
    { 
        InvalidTransaction(Request["tx"], my_pdt_handler.dic_data);
        return; //stop processing; 
    }

    //---Insert code to use my_pdt_handler.dic_data to process the transaction           
}

protected void InvalidTransaction(String pStr_transactionID = "", Dictionary<string, string> pDic_data = null)
{//invalid transaction -> handle however you feel is best

}

Tuesday, June 5, 2012

New Directions & New Digs

Things at Rubicite have been a bit tumultuous of late. One of our partners (we'll call him X) decided to leave the company. X and our CEO are brothers, and family drama entered into work on more than one occasion - something for which they were both at fault. In the end, X chose to leave because the stressful conditions that this created weren't worth it in his eyes. All parties were happy and pleasant about the break; we took X out to lunch at his favorite restaurant (a tradition I enjoyed at NSA, and one I intend to continue here with Rubicite).

A few things will be happening as a result of this. First, Rubicite is buying X's shares. That's creating some difficulty because X held around 35% of the company and a large chunk of our financial assets are in a set of invoices that haven't yet been paid. In the end, he's going to have to wait a few weeks for the final payout. And we're going to be strapped for cash for a few weeks until some invoices get paid.

Second, the remaining partners all have more ownership of Rubicite now. X's shares are being split proportionally to our previous distribution, which takes me from 15% to around 23%.

Third, I get X's office. That was going to happen anyway, since X worked from home a lot. But the timing worked out pretty well, since I just got my new desk delivered and it was going to make my old office pretty cramped. The new office is bigger than my old one (around 14 x 11 instead of 11 x 10), and it has two windows. In the new office, the desk fits comfortably, with room for a nice seating area should I need it.

Fourth, Rubicite's finances should improve more quickly. For at least the last six months, X has been working on internal projects that weren't making us money right now. Among other things, those projects included coding for our first computer game, and improvements to Grinderschool. Both were valued contributions, but having a large portion of our work-force not on paying work led to slower growth than we might have seen otherwise. Instead, we will now see slower improvements to Grinderschool and the game will likely take a bit longer to get to the point of seeking funding, BUT we should be making money significantly faster than we are spending it - meaning salary increases should be a bit less distant on the horizon.

Finally, the rest of us will have to adjust our work in order to keep moving forward on the projects X was working on. We want to do so in a way that will lead to minimal drain on our billable time. That will probably mean some extra hours for each of us here and there, and it will also probably mean that X's projects slow down a bit. I'm becoming a bit excited at the prospect of getting my hands into the mix for our first game - previously I had envisioned myself as being more-or-less only in the web development side of Rubicite.

We are having a company meeting later this week to look at finances and develop the future work-plan for Rubicite. More will certainly be decided then.

Monday, May 21, 2012

Serving Unlimited Domains From IIS, pt 2

Last time, I posed the question of how to serve unlimited domains from IIS. I got to digging around, found some options myself, and had some better options revealed to me by folks in the Tulsa WebDevs Facebook group and by my partners here at work. So here are the options, as I currently understand them:

1) Build It Into The Code
Probably the best option is to programmatically create bindings in IIS6 and/or in IIS7. This way everything is integrated into the webapp, meaning there's no muss or fuss outside of the app. It requires a bit more work in the app itself, but the benefits of keeping things clean and keeping all the functionality around this action inside the single codebase are almost definitely worth it.
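For the IIS7 side of things, a rough sketch of what that binding code could look like (this uses the Microsoft.Web.Administration assembly; the method, site name, and domain here are just placeholders):
using Microsoft.Web.Administration;

public static void AddDomainBinding(string siteName, string domain)
{
    using (ServerManager manager = new ServerManager())
    {
        Site site = manager.Sites[siteName];
        if (site == null)
            return; //site not found -> handle however you feel is best

        //add an http binding on port 80 for the newly added domain
        site.Bindings.Add("*:80:" + domain, "http");
        manager.CommitChanges();
    }
}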

2) PowerShell
Another option is to set up a PowerShell script that detects changes to the database and creates the corresponding bindings. This would also work well, but it has the drawback of creating two codebases to maintain.

3) Remove Domain Bindings
This StackOverflow answer led me to try removing the existing domain from the webapp's bindings in IIS. Making this change resulted in being able to reach my webapp by just visiting the IP address (so the binding was no longer an issue), and the one domain we have set for this webapp so far still reached the desired site as well. So it seems the solution could be as simple as having no host/domain listed in the bindings on IIS. As long as only one site does this, all traffic that does not match another binding loads that site. A big upside here is that it takes less time/effort than any of the coding solutions mentioned above. The downside is that only one site on the server can behave this way, and you can no longer have the server locked to serving only content for recognized domains.

Friday, May 18, 2012

Serving Unlimited Domains From IIS, pt 1

In an upcoming version of a currently-in-development webapp, I need to serve multiple domains from a single site. The code on the site will recognize the individual domains and vary the content accordingly. I do not know all of the domains that we will be serving, as clients can add new domains to their site. The coding parts, I know how to do - when clients add a domain, there will be a corresponding entry into our database and that will act as a key to control which set of content is shown.
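I expect that lookup to be something like this rough sketch (SiteDataContext, ClientDomains, and the field names are hypothetical stand-ins for whatever model we end up with):
protected void Page_Load(object sender, EventArgs e)
{
    //the domain the visitor actually used to reach the server
    string host = Request.Url.Host.ToLower();

    //hypothetical Linq-to-SQL model mapping client domains to their content sets
    SiteDataContext db = new SiteDataContext();
    var clientSite = (from d in db.ClientDomains
                      where d.Domain == host
                      select d).FirstOrDefault();

    if (clientSite == null)
    {
        //unrecognized domain -> show a default page (or an error)
        return;
    }

    //---Insert code to load the content set keyed by clientSite.ClientID
}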

The thing is, I suck at system administration. If I knew the domains ahead of time, I could simply point them to our server's IP and then create bindings in IIS to handle each. But since I do not know the domains ahead of time, I'm rather at a loss. I welcome any suggestions. Expect part 2 of this entry in the coming weeks, to reflect whatever solution I find.

Thursday, February 16, 2012

DotnetOpenAuth & Twitter

This is the first in a series of posts about using DotnetOpenAuth to provide authentication from external services. Today, we're talking about Twitter.

The first thing you need to do is to create an app with Twitter. Visit https://dev.twitter.com/apps, log in, and then click "Create a new application". Fill out the simple form. Use any valid website for your site and for your callback URL - preferably something for your specific organization or project, but if you don't have anything, just put down any valid URL. The important thing is to make sure you fill it all out. After your app is created, you will see a details screen that includes a section of "OAuth settings". The important items for step 2 are the Consumer Key and the Consumer Secret.

Next, you need to include the Consumer Key/Secret in your .Net application. That means including the relevant data as part of your Web.config. Something like this (note that the key/secret here are for the DotNetOpenAuth sample test project; it'll look pretty silly if you forget to replace these with the values for your actual project):
<appSettings>
  <!-- Fill in your various consumer keys and secrets here to make the sample work. -->
  <!-- You must get these values by signing up with each individual service provider. -->
  <!-- Twitter sign-up: https://twitter.com/oauth_clients -->
  <add key="twitterConsumerKey" value="eRJd2AMcOnGqDOtF3IrBQ" />
  <add key="twitterConsumerSecret" value="iTijQWFOSDokpkVIPnlLbdmf3wPZgUVqktXKASg0QjM" />
</appSettings>

Finally, we can get to coding the app. Your first step is to download the DotNetOpenAuth package. There are several different versions available, but at the time of this article the best one (read: only one I could make work for Twitter, Facebook, and OpenID) is version 3.5.0.x. Grab this, and play around with it as much as you like. Or just yank some of the dll's and move on with doing your actual project. The ones you are interested in are DotNetOpenAuth.dll and DotNetOpenAuth.ApplicationBlock.dll. They can both be found in \Samples\OAuthClient\bin\ of the repository you downloaded. Load these in as project references to your .Net app. My experience is that this works best if you place the dlls into the \bin\ folder of your app and then right-click on the project in Visual Studio to add the references.

Once you have the references set up, design the frontend. Set it up however you would like, but the key is to provide some kind of link that will prompt people to sign-in via Twitter. You can make your own, or use one of the ones that Twitter provides for you. It is the clicking of that link to which you want to attach some processing.

On the backend of your app, start off by including everything you need. Cheat by replicating the includes from the sample project (while you may not need all of them, it's easiest to start with everything and then remove the ones you know you do not need):
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Xml.Linq;
using System.Xml.XPath;
using DotNetOpenAuth.ApplicationBlock;
using DotNetOpenAuth.OAuth;

Next, include the actual login stuff in the page load:
if ((Request["openid_identifier"] == "http://twitter.com/")|| ((Request["oauth_token"] != null) && (Request["oauth_token"].Length > 1)))
{
    if (TwitterConsumer.IsTwitterConsumerConfigured)
    {
        if (IsPostBack)
        { TwitterConsumer.StartSignInWithTwitter(true).Send(); }
        else
        {
            string screenName;
            int userId;
            if (TwitterConsumer.TryFinishSignInWithTwitter(out screenName, out userId))
            {
                //userId is now a unique numeric id for the user
                //screenName is now their Twitter username

                //use these values to do whatever you want
            }
        }
    }
}

After that, you're done. Cheers, you now have their information from Twitter. Use it however you like!

Tuesday, February 14, 2012

Hiatus: Rubicite Buys Grinderschool

I've been on a blogging hiatus because I've been on a programming hiatus. The last couple of weeks have been spent doing marketing stuff to get new clients and new investors AND dealing with all of the issues that arise during a business acquisition. I've been on both sides of that acquisition as Rubicite Interactive purchased Grinderschool, effective February 1, 2012.

In the long run, this will mean a lot of good things for Grinderschool, and it will mean the addition of a profitable business line for Rubicite. On the Grinderschool side, the site has long suffered from my lack of time - I simply couldn't fit it into my schedule to give the site as much attention as it needed in order for the site to grow. Now that part of my day job is keeping the site running, that will not be an issue. Further, the development needs of the site are being spread across the staff here at Rubicite instead of falling strictly on me. That means good things for the site, as it ultimately means that development on "nice-to-have" improvements will actually happen instead of just adding to the growing list of things I'd try to tackle eventually.

Anyway, that's where I've been. But now I'm getting back into the swing of things on the development side. Expect to hear more from me soon!

Edit: As of December, 2012, Grinderschool is no longer part of Rubicite Interactive. It is back under the umbrella of Interware Innovations.

Thursday, January 26, 2012

Invalid Viewstate w/ ASP.Net Cross-page Posting

Recently, I had a need to send data from one page to another as part of preview functionality for a web-app. Various factors made it best to send this data as a POST rather than attaching it as a querystring to send via GET. In every web development environment I worked in prior to ASP.Net, this was as simple as changing the action associated with the form that sends the data (so that it points to the other page instead of posting to itself). The receiving page processes the POST data from an alternate page the same way it would process the POST data from itself (assuming, of course, that you aren't using a form filled with ASP controls, in which case it does get more complicated...*).

When I tried to implement this same model in ASP.Net, it failed, producing a server run-time error stating that the Viewstate of the receiving page was invalid (and explicitly not referencing any of my code). Lovely. After digging in a bit further, I found an easy fix. It turns out that the action on the form needs to remain the source page itself. There are special attributes you can include on an ASP.Net control button which tell the button to act as a submit button targeting a different page than the normal form action. So the fix was as simple as replacing my HTML submit button with this ASP.Net control:

<asp:Button runat="server" PostBackUrl="page.aspx" ID="btnID" Text="Submit"/>

That's a simple solution, and with it in place, everything else seems to work as expected. Regardless, I find it annoying that breaking a basic functionality of HTML is somehow deemed acceptable in ASP.Net, so long as they provide this work-around.

* The more complicated part of dealing with ASP.Net controls is only slightly more complicated. Your current page doesn't have these controls in the Viewstate as it would with a typical ASP.Net postback. Instead, these controls are loaded into a PreviousPage object, which you then reference via PreviousPage.FindControl("id").
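So, for example, grabbing a value posted from the source page looks something like this (the control ID here is just illustrative):
if (PreviousPage != null)
{
    //the posted controls live on the PreviousPage object, not in this page's Viewstate
    TextBox txtPreview = (TextBox)PreviousPage.FindControl("txtPreview");
    if (txtPreview != null)
    {
        string previewText = txtPreview.Text;
        //---Insert code to use previewText to build the preview
    }
}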

Friday, January 20, 2012

Violating Language Specifications

In my previous post about using jQuery to provide default values to text fields, the comments here and on social media got a bit interesting. First, suggestions for improvements came from Erik (which I edited into the post itself). That suggestion revolved around using a CSS class to track which fields should have a default value and then using a custom html attribute to maintain the value that the field should have. This technique dramatically simplifies the implementation. Sounds good, right?

Well, there's one problem. Maybe. Adding a custom html attribute results in markup that violates HTML specs. By employing this technique to make current and future development cleaner and more pleasant, you are technically creating invalid HTML pages. Sounds bad, right?

I'm not so sure. The risks you run by violating a language's specs are pretty straight-forward: there is a significant increase in the chance that a browser somewhere will render your page wrong and/or your code will utterly fail. Further, you may complicate the maintainability of the codebase by confusing future developers with your non-standard practice. Some violations are more likely to be problematic than others.

When I was creating my StackExchange clone, I made the mistake of unintentionally including a <div> inside a <span> element. On most browsers, this was rendered fine. No other developer would be confused by it. But about 1% of the time, Firefox 3.5 (the most common version that hit the site at that time) would drop text-styling from the offending element. So on a page displaying 15 questions, there was a 10 - 15% chance that at least one of the questions would be almost inexplicably styled wrong. Once the cause of the problem was found, it was an easy fix, but it was certainly a pain to find. And in fact the only way I did find the problem was through running the pages through an HTML validator and fixing the resulting violations.

On the other hand, there are definitely cases where violating spec is worthwhile. HTML spec requires javascript includes to be at the top of the page, in the HEAD. Yet any good developer will put as many of these includes as possible at the end of the file, so that the html is rendered before the javascript is retrieved (as this often results in pages that seem significantly faster to the user). This violation is so common that it has become a virtual standard. Many tools for checking page load times specifically search for this factor. Certainly, then, this is one violation that we all agree should be employed.

In the end, I think it comes down to balancing the dangers of violating a language's specification with the benefits of the violation. If the benefits outweigh the risks, I submit that violating specs is a proper coding practice. What do you think?

Wednesday, January 18, 2012

Quick & Easy jQuery Default Values

I have some more meaty stuff to post in the near future, but those posts will take some time to develop. Instead of going down that path today, I decided to go for the low-hanging fruit of sharing the quick & easy function I threw together to give default values to a client's web app. No long theory discussion today, but if you find this code useful, great. Basically, the function takes as input the ID of the field to which you want to assign a default label, and the text of that label. Then it immediately gives the field that value in a light grey color if there is no existing value. When the user clicks into the field, it blanks out to let the user supply their actual input. When the user leaves the field, it checks the value to see if the field is blank, reassigning it as the label in light grey if so.
function giveGreyLabel(id, label) {
    if ($("#" + id).val() == "") {
        $("#" + id).val(label);
        $("#" + id).css("color", "silver");
        $("#" + id).click(function () {
            if ($("#" + id).val() == label) {
                $("#" + id).val("");
                $("#" + id).css("color", "black");
            }
        });
    }

    $("#" + id).blur(function () {
        if ($("#" + id).val() == "") {
            $("#" + id).val(label);
            $("#" + id).css("color", "grey");
        }
    });
}
A refinement to the above code came out in the comments. Basically, by employing CSS classes for the styles, you can use selectors to do the adjustments. That would look something like:
$('.defaultText').blur(function(){
   var me = $(this);
   if(me.val() === ''){
      me.val(me.attr('default'));
      me.addClass('defaultColor');
   }
});

$('.defaultText').focus(function(){
   var me = $(this);
   if(me.val() === me.attr('default')){
      me.val('');
      me.removeClass('defaultColor');
   }
});
And the HTML to use it would be:
<input type="text" class="defaultText" default="First Name" id="firstname"/>
This code handles the changes for when the fields gain or lose focus, but it does not do anything for the initial view. To handle that, you would need to either call the .blur() function for the defaultText class, or employ a third function:
$('.defaultText').each(function(){
   var me = $(this);
   if(me.val() === ''){
      me.val(me.attr('default'));
      me.addClass('defaultColor');
   }
});

Thursday, January 5, 2012

jQuery Autocomplete Via JSON in ASP

On a recent project, we needed to add autocomplete functionality to a few text fields. As I've noted before, in past lives, I would do this through jQuery-UI's autocomplete, with a stand-alone page that does nothing but echo the desired JSON string. Since I'm happy with jQuery's implementation, I decided that the best bet was to go ahead and replicate this functionality in ASP.Net. Turns out, it's pretty darn easy.

First, the change to the client-side/display code: In the code-front (with the HTML), nothing changes. In the Javascript onload function, one line needs to be added:
$("#txtRequestType").autocomplete({ 
   source: "json.aspx", delay: 100 
});
I always set the delay to 100ms because the default of 300ms feels aggravatingly slow to me.

Next, we construct the server-side JSON generation code. This is where I expected to encounter some hang-ups, as I wasn't even sure what type of ASP.Net project item to use to create a page that had only text/JSON content. I played with a few options before coming upon a Stack Overflow question that made it trivial to create a text/JSON-only page. The key was to build the JSON string, use the Response object to write it, and then end the response (writing nothing else):
protected void Page_Load(object sender, EventArgs e)
{
  string json = getJsonString();
  Response.Clear();
  Response.ContentType = "application/json; charset=utf-8";
  Response.Write(json);
  Response.End();
}
You could just build your JSON string in that function, but splitting it out to a separate function seems like the correct decision. Plus, it allows maximum extensibility by using an optional source parameter to specify what should be used to build the JSON string, making this page a multi-purpose one:
protected string getJsonString()
{
   LandLordDataContext db = new LandLordDataContext();
   var Values = (from t in db.Table1
   orderby t.Field1
   select t.Field1).Distinct();

   if (Request["source"] == "Table1Field1")
   {
     Values = (from t in db.Table1
               orderby t.Field1
               select t.Field1).Distinct();
   }
   else if (Request["source"] == "Table2Field3")
   {
      Values = (from t in db.Table2
                orderby t.Field3
                select t.Field3).Distinct();
   }

   string strJson = "";
   string currValue = "";
   foreach (var Value in Values)
   {
      currValue = Value.ToString();
      if ((Request["term"] == null) || (currValue.ToLower().Contains(Request["term"].ToLower())))
      { strJson = strJson + ", \"" + currValue + "\""; }
   }
   if (strJson.Length > 2) strJson = strJson.Substring(2);

   string strResult = "[" + strJson + "]";
   return strResult;
}
The final thing worth noting here is that you have to do the filtering on the server-side. That might be counter-intuitive if you've otherwise used a javascript array to interface with jQuery's autocomplete (where it does the filtering automatically). That's where the processing around Request["term"] comes into play (inside the foreach of the final function).

Overall, this took only slightly longer to sort out and set up than an equivalent setup would have taken me on the LAMP stack. But going forward, it will be trivial to repeat in minutes.

Wednesday, January 4, 2012

Liking Linq-to-SQL

Prior to joining Rubicite, I hand-crafted every SQL query that I used in web development. In fact, I often tested my queries for accuracy of results and performance by replacing any variables with set values and then running them through a command-line interface to the database. I took steps to prevent problems like SQL Injection, but didn't really even take that as far as I should have; in relatively few cases did I use parameterized queries (preferring my own functions for white-list input validation, filtering, and testing variables in an SQL query). Part of this was based on the issues I have seen in past code from coworkers that was overly reliant on database frameworks, resulting in queries that retrieved millions of records to display a couple dozen entries.

As I moved into the ASP.Net world, there was a temptation to stick with what I've always done. The support for parameterized queries is more integrated into the base language structure, and initially I assumed that would be enough to satisfy me. Others at Rubicite disagreed, and pushed me to learn how to use LINQ to SQL. Now, I am glad they did.

I am not going to go over the details of the Linq-to-SQL methodology and implementation. I'll touch on a few highlights while discussing the things I like about it, but for full details check out Scott Gu's 9-part tutorial.

Basically, the components of a Linq-to-SQL implementation include a database model that is loaded into your .Net project, coupled with LINQ queries in your code. LINQ queries look rather different than a typical SQL query, but once you wrap your mind around the structure, they are just as easy to craft. Here's an example:
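(The table and field names below are just stand-ins for a real schema.)
SELECT *
FROM Table1
WHERE ValueField = 'targetValue'
ORDER BY ValueField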

Rewriting that query in Linq-to-SQL looks like:
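(MyDataContext and the field names here are again just stand-ins.)
MyDataContext db = new MyDataContext();
string targetValue = "targetValue";

var Results = from t in db.Table1
              where t.ValueField == targetValue
              orderby t.ValueField
              select t;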


As you can see, the two are pretty similar. I included more than just the LINQ query there to illustrate that you must define and reference a datacontext on which to base the query. That comes from the database model you loaded into the project. I left out any protections from the SQL version, but clearly if you were using this query in real life, you would want to employ some of the techniques I mentioned in my first paragraph.

So, why would I choose the LINQ version over the SQL version? On the surface, the two look pretty similar. All things being equal, most would probably stick with the SQL version as it is more likely to be familiar. I certainly would. Except that all things aren't equal. You see, in the SQL version, there's the matter of implementing the security techniques noted above. That means wrapping the query with some kind of data structure or package to do the execution, and then adding logic and/or parameterization code to ensure we aren't opening ourselves up for a world of hurt. In the LINQ query, there is no need for a separate data structure. Instead, LINQ integrates into the base language to enable strong typing directly in the query itself. That means that if the type of `t.ValueField` does not match the type of `targetValue`, then the code will fail to compile. It knows these types because it is running directly in .Net and has loaded the types from the database into the datacontext. This is a big win.

Another big win is the simplicity of dealing with the data afterwards. Rather than having an obscure resultset with a generic set of columns that may or may not actually exist, the result of a LINQ query is an object (or, rather, a collection of objects). Each column of the results is automatically built into the resulting object as a parameter. So in later code, I could access the ValueField of the first result object through `Results.First().ValueField`. Types are preserved in these objects, improving the interaction in the related code quite a bit over what you would have in a generic resultset. Certainly, you could build your own object to populate from a generic resultset of an SQL query, and that would provide you many of the same benefits, but there's no comparison to the speed and ease of use that the LINQ method gives you. And should you prefer to have a custom object for the LINQ query (to provide custom methods, perhaps), you can easily do that as well.

The two drawbacks I have found to LINQ are the handling of complex queries and the requirement to keep the database model up-to-date. Really complex queries can result in a mess of a query, and might even give some performance issues. The solution I use is to move queries of this complexity to stored procedures, which is probably the right thing to do in a traditional SQL approach as well. Keeping the database model up-to-date is an extra step required any time you create a new table or alter the structure of an existing one. It is as simple as a few mouse-clicks to delete the old model of the table and add the new one(s) in. Definitely worth it for all the other benefits.
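For what it's worth, once a stored procedure has been dragged onto the model, calling it is just another method on the datacontext. A rough sketch (the procedure name and parameters here are hypothetical):
MyDataContext db = new MyDataContext();
DateTime startDate = DateTime.Today.AddDays(-30);
DateTime endDate = DateTime.Today;

//the designer exposes the stored procedure as a strongly typed method
var reportRows = db.GetComplexReport(startDate, endDate);

foreach (var row in reportRows)
{
    //---Insert code to use each strongly typed result row
}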