Monday, February 7, 2011

JavaScript/jQuery code optimization (or famous IE bottlenecks)

After we pulled out the Infragistics grid in favor of the Telerik MVC Extensions, we had to code several features ourselves that Infragistics provided out of the box. This was partly due to our data structure not being fully supported by the Telerik grid, but that's a topic for another blog post. One of those features was the ability to select several cells and do spreadsheet-like things with them, such as copy/paste and fill. We were mostly testing this in Chrome/Firefox and with small tables. However, as typically happens, after releasing into our demo environment, our BA told us that the performance was brutal.

Surprise, surprise, it turned out that our grid page ran very slowly in IE7 with large tables (50 rows by 35 columns). I was able to trace the bottleneck down to this chunk of code.
$('tbody td', $(this.element)).each(function() {
   if (this.parentElement.rowIndex >= selection.startRowIndex &&
       this.parentElement.rowIndex <= selection.endRowIndex &&
       this.cellIndex >= selection.startColIndex &&
       this.cellIndex <= selection.endColIndex) {
      if (columns[this.cellIndex].readonly != true) {
         $(this).addClass('selected');
      }
   }
});
This is the code that runs when a user has a cell selected and is either dragging the mouse or shift-clicking to select a block of cells. What's happening here is that we go through every cell in the table body and check whether it lies between the first selected cell and the cell under the mouse, i.e. whether the cell falls inside the selection 'box'. The problem is that we visit every cell in the table body, even if only one has been selected.

The first optimization was to iterate only over the rows and columns that were inside the selection box, using the jQuery slice method. The code is below.

$('tbody tr', this.element)
   .slice(selection.startRowIndex, selection.endRowIndex + 1)
   .each(function() {
      $(this).children('td')
         .slice(selection.startColIndex, selection.endColIndex + 1)
         .each(function() {
            if (!columns[this.cellIndex].readonly) {
               $(this).addClass('selected');
            }
         });
   });

Performance improved a lot when the selection box was small, but as soon as the user selected anything bigger than about 4 rows by 10 columns, the script crawled again. Each row took approximately 190 ms to process in IE7, so selecting 10 rows took almost 2 seconds. The problem seemed to be the readonly check: with it taken out, the timing dropped to a fraction of a millisecond.

The problem is that Internet Explorer is very slow at processing conditional statements. Doing some unscientific tests on this site, I got roughly the following numbers:

Browser                 Average Time
Chrome 8                0.02 ms
Firefox 4 b10           0.66 ms
Internet Explorer 8     52 ms
Internet Explorer 7     94 ms
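
For context, a timing harness along these lines reproduces that kind of gap. This is only a minimal sketch: the loop count and the fake columns array are made up for illustration, not the actual test the site runs.

// Time how long a batch of readonly-style conditional checks takes.
// The columns array below is made up purely for illustration.
var columns = [];
for (var i = 0; i < 35; i++) {
   columns.push({ readonly: i % 5 === 0 });
}

var start = new Date().getTime();
var selectable = 0;
for (var j = 0; j < 100000; j++) {
   if (columns[j % columns.length].readonly != true) {
      selectable++;
   }
}
var elapsed = new Date().getTime() - start;
// console.log isn't available in IE7, so fall back to alert.
alert(elapsed + ' ms for ' + selectable + ' selectable checks');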

So we had to find a way to optimize that conditional statement. I finally found a jQuery call that seemed to do the trick.

$('tbody tr', this.element)
   .slice(selection.startRowIndex, selection.endRowIndex + 1)
   .each(function() {
      $(this).children('td')
         .slice(selection.startColIndex, selection.endColIndex + 1)
         .not(function() {
            return columns[this.cellIndex].readonly;
         })
         .addClass('selected');
   });

After going through the jQuery source, I'm still not 100% sure why the last version performs better. A couple of things I've noticed: using each to run a check on every item appears to be slightly more expensive than not (or, for that matter, filter), since each also has to check whether the callback has returned false so it can break out of the loop. addClass also has a fair amount of overhead, so running it on elements one by one is inefficient compared with calling it once on the whole collection. I'm not sure that fully explains it either, though, because the second snippet ran very quickly once the readonly check was removed.

If anyone has any ideas on why the third code snippet runs so much faster than the second, I would love to hear them. Until then, this is just another example of why it's important to optimize your jQuery code.

Monday, January 31, 2011

Using CVS and Git in tandem

Version control systems are indispensable for software developers. They provide many benefits: team development, code recovery, history tracking, and more. Everyone has their favorite, but in a corporate environment it's not always possible to use the one you like. On my current project, the corporate standard is CVS. It's been a while since I've used it, and not to bad-mouth CVS, but the shine has really worn off this VCS in the last decade.

I decided to set up a local Git repository to use for development and to use the CVS repository for large-scale commits. This worked well until a second developer came on board and we had to set up a system in which merges were not a giant pain in the butt. The system we eventually settled on works like this: we both have local Git repositories in which we do our feature development, and we also set up a bare repository on the build server that we push to once we've completed our features. This central repository is also used for our continuous integration builds.
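
Setting up the central repository is the standard bare-repo dance. Roughly the following, with the server name and paths as placeholders rather than our real setup:

# On the build server: create a bare repository that everyone pushes to.
git init --bare /srv/git/project.git

# On each developer machine: register it as a remote and publish completed features.
git remote add origin buildserver:/srv/git/project.git
git push origin master

# Pick up the other developer's completed features.
git pull origin master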

The last part of the setup is a separate folder containing the CVS checkout; I've named it CVS. This folder also contains another Git repository. The process of checking into CVS goes as follows. We first do a CVS update to bring our checkout in line with any CVS check-ins the other developer may have done. We do this first because Git is far more forgiving than CVS about incoming files that already exist on the file system. After we have synchronized our checkout with the latest from CVS, we do a pull from our central bare repository. At this point we have to go through and manually `cvs remove` any files that were deleted during our feature development and `cvs add` any new ones. Once all that has been done, we are ready to check in.
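
Spelled out as commands, one round trip looks roughly like this (paths, file names, and the commit message are illustrative):

cd ~/work/CVS                 # the CVS checkout, which also holds its own Git repo

# 1. Bring the checkout in line with the other developer's CVS check-ins.
cvs update -d

# 2. Pull our completed features from the central bare Git repository.
git pull origin master

# 3. Tell CVS about file-level changes it can't detect on its own.
cvs remove OldHelper.cs       # for each file deleted during feature work
cvs add NewFeature.cs         # for each brand-new file

# 4. Check everything in.
cvs commit -m "Feature work from the Git side"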

So that's it. The process saves us a lot of headaches while we are doing our feature development and we are able to use our pet VCS without too much trouble.

Thursday, January 27, 2011

Debugging JavaScript

The tools available to web developers have grown a lot in the last 5 years. Time was, the only way to debug JavaScript was to pepper your code with alert statements. Now you can get a variety of plugins and tools that allow you to examine HTML, CSS, JavaScript, and other elements of a web page. Here are some of the better tools out there:

Firebug
One of the best debugging tools out there. It allows you to inspect and modify HTML elements and styles, comes with a great JavaScript debugger, and lets you examine network usage. Available as an add-on for Firefox. (Note: if you're using the Firefox 4.0 beta, make sure you're on Firebug 1.7 and Firefox 4.0 beta 10, as earlier versions are broken.)
Internet Explorer Developer Toolbar
Let's face it, if you've developed any kind of web application, you've grown to despise IE. With its quirks and non-standard implementation of all things HTML, CSS, and JavaScript, IE is one of the biggest pain points for any web developer. The Internet Explorer Developer Toolbar tries to ease some of that pain by letting you examine HTML and make modifications to styles.
Visual Studio
A decent tool for debugging JavaScript in IE. Can be a bit of a pain, but contains everything a debugger should: call stack, breakpoints, watch variables, and quick watch.
Chrome Developer Tools
One of the easiest browsers to develop for, Chrome also comes with a great set of developer tools. You can view and change HTML elements, trace styles, debug JavaScript like a pro, and examine loaded resources, including XHR content. It also provides optimization and performance analysis.

All of these tools are great, but what prompted me to write this entry is another great tool: JSFiddle. If you've ever tried debugging a small JS problem, you know it's a real pain to constantly update your code, reload the page, and step through the multitude of lines your page might contain. JSFiddle provides a great way to isolate problem code and debug it: you enter your HTML, CSS, and JavaScript and run it all in one page. It also lets you easily load different versions of most of the common open-source JS frameworks, and it even provides versioning of your "fiddles". You don't need an account to use the site, but if you want to save and share your fiddles you will need one, and I highly recommend signing up.

Saturday, January 22, 2011

Dynamic stored procedures without using IFs

Have you ever written a stored procedure like this?
create procedure GetProducts
   @id int
AS BEGIN
   IF @id IS NOT NULL
      SELECT *
      FROM Products
      WHERE id = @id
   ELSE
      SELECT *
      FROM Products
END
Here's a tip that DBAs and SQL developers have known about for a while: you can shorten the procedure to this.
create procedure GetProducts
   @id int
AS
   SELECT *
   FROM Products
   WHERE (@id IS NULL OR id = @id)
You should take care to examine the execution plan and make sure it isn't doing a table scan; otherwise performance will suffer.
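
One hedged variation, assuming SQL Server 2008 or later, is to add OPTION (RECOMPILE) so the optimizer builds a plan for the actual parameter value and can ignore the irrelevant half of the OR. The trade-off is that the statement is recompiled on every execution.

create procedure GetProducts
   @id int
AS
   SELECT *
   FROM Products
   WHERE (@id IS NULL OR id = @id)
   OPTION (RECOMPILE)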

Monday, January 17, 2011

The NHibernate Anti-Pattern

On Stack Overflow, a lot of NHibernate questions come with a code snippet similar to the following:

public IList<MyObject> GetObjects()
{
   using (ISession session = sessionFactory.OpenSession())
   {
      return session.CreateCriteria<MyObject>().List<MyObject>();
   }
}

What's wrong with this code? At first glance, not much. We're opening a session, running our query, and then cleaning up any open resources.

If we were dealing with ADO.NET, opening a connection, doing our work, and then closing the connection would be the typical pattern. However, an NHibernate session is a lot more than a database connection. One of its biggest benefits is the first-level cache: the session caches objects that have been retrieved by their id, as well as saved objects. This means that if you've retrieved an object once, then as long as you haven't closed the session, any time you retrieve it again NHibernate will hand you the cached version, saving a round trip to the database.
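
A quick sketch of what the first-level cache buys you (the id and the mapping are assumed; this isn't code from our project):

using (ISession session = sessionFactory.OpenSession())
{
   // First call hits the database.
   var first = session.Get<MyObject>(42);

   // Second call with the same id is served from the session's
   // first-level cache -- no second round trip.
   var second = session.Get<MyObject>(42);

   // Within a single session, both variables point at the same instance.
   bool same = ReferenceEquals(first, second); // true
}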

Another benefit is batching database writes. Since updates are held in the session, NHibernate doesn't have to write them to the database immediately. If you let NHibernate flush automatically, it can hold on to its pending updates and then run them together in far fewer round trips to the database.
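
And a sketch of the write side (the entities and values are illustrative; batching into fewer round trips also depends on the adonet.batch_size setting):

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
   var a = session.Get<MyObject>(1);

   // These changes only live in the session for now; nothing is written yet.
   a.Reference = null;
   session.Save(new MyObject());

   // Pending inserts/updates are flushed together when the transaction
   // commits, instead of one statement at a time as the changes are made.
   tx.Commit();
}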

What about lazy loading? If our MyObject class looks like this:
public class MyObject
{
   public virtual int Id { get; set; }
   public virtual AnotherObject Reference { get; set; }
}
and we wanted to access the Reference property, we would need to eagerly load each associated AnotherObject before closing the session. This is because NHibernate uses proxy objects: when your references are lazy loaded and you access the Reference property, NHibernate uses the same session the original object was loaded with to fetch the associated reference. If that session has already been closed, you will receive an exception telling you that the associated session has been closed.
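
Here's roughly how that failure shows up, assuming Reference is mapped as lazy (NHibernate's default):

MyObject item;
using (ISession session = sessionFactory.OpenSession())
{
   item = session.Get<MyObject>(42);
}  // the session is closed and disposed here

// Reference is still an uninitialized proxy; forcing it to load now throws
// a LazyInitializationException, because the session it would use is gone.
NHibernateUtil.Initialize(item.Reference); // throws here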

So if the pattern in the first code snippet is a bad one, which pattern should we be following? The folks over at NHibernate recommend the Unit of Work pattern. The idea is that you create a session and keep it alive for the duration of a so-called "conversation". An example of a conversation is the HTTP request in a web application: a session is opened at the beginning of the request and is closed and disposed at the end of it. You can even wrap the session in a transaction for extra reliability.
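
A minimal session-per-request sketch in Global.asax might look something like this (the configuration call, the context key, and the error handling are simplified assumptions, not a drop-in implementation):

using System;
using System.Web;
using NHibernate;

public class MvcApplication : HttpApplication
{
   private static readonly ISessionFactory SessionFactory =
      new NHibernate.Cfg.Configuration().Configure().BuildSessionFactory();

   protected void Application_BeginRequest(object sender, EventArgs e)
   {
      // One session (wrapped in a transaction) per HTTP request.
      var session = SessionFactory.OpenSession();
      session.BeginTransaction();
      HttpContext.Current.Items["nh.session"] = session;
   }

   protected void Application_EndRequest(object sender, EventArgs e)
   {
      var session = HttpContext.Current.Items["nh.session"] as ISession;
      if (session == null) return;

      try
      {
         if (session.Transaction != null && session.Transaction.IsActive)
            session.Transaction.Commit();
      }
      finally
      {
         session.Dispose();
      }
   }
}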

In a Windows app it's a little trickier. One way to define a unit of work is in the context of a thread. Another is around a user action: for example, if a button click triggers a data update, a new session is created and then closed once all the work has completed.

Finally, you can use your favorite IoC container to define how and when a new session is created, and your data access layer can then be injected with the reusable session.
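
Registration syntax differs from container to container, so here is just the shape of the data access side: the repository asks for an ISession in its constructor, and the container is configured to supply the session for the current unit of work (the class name is hypothetical):

using System.Collections.Generic;
using NHibernate;

public class MyObjectRepository
{
   private readonly ISession _session;

   // The container resolves ISession to the session belonging to the
   // current unit of work (per request, per thread, etc.).
   public MyObjectRepository(ISession session)
   {
      _session = session;
   }

   public IList<MyObject> GetObjects()
   {
      // No OpenSession/Dispose here -- the unit of work owns the session's lifetime.
      return _session.CreateCriteria<MyObject>().List<MyObject>();
   }
}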

Saturday, January 8, 2011

Rendering an ASP.NET WebForm via an MVC action

As part of our rewrite from WebForms to MVC, we have some reports that are generated via Infragistics. I know I said in the previous post that the reason we're doing the rewrite is that we hate Infragistics; however, one thing Infragistics does well is give the developer a lot of power for building reports.

There is no GUI for report development and the documentation is virtually non-existent, but if you need dynamic reports generated at runtime, it's a great library to use.

I won't bore you with the implementation details of our reports, but the fact of the matter is that we need to render a WebForm. How do you do that inside a controller action? Fortunately, the folks at Microsoft have thought of this scenario: there is an IView implementation called WebFormView. So if you need to return a classic ASP.NET Web Form as part of your action, you can do it like so.

public ActionResult Index()
{
   return View(new WebFormView("~/path/to/your/webform.aspx"));
}

The only catch is that your WebForm must inherit from System.Web.Mvc.ViewPage rather than System.Web.UI.Page. Behind the scenes it will use the BuildManager to build your web form and call its ProcessRequest method with HttpContext.Current, which means the page is taken through the normal ASP.NET page life cycle.
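
In practice that just means changing the base class of the code-behind (the class and file names here are placeholders):

// webform.aspx.cs -- derive from ViewPage instead of System.Web.UI.Page.
public partial class WebForm : System.Web.Mvc.ViewPage
{
   protected void Page_Load(object sender, System.EventArgs e)
   {
      // The normal WebForms page life cycle still runs.
   }
}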

You can even serve up web user controls this way; they have to inherit from System.Web.Mvc.ViewUserControl instead. There is one more limitation: the user control can contain only plain HTML elements (they may still carry the runat=server attribute), or, if you really need server controls, the user control must contain a form element with the runat=server attribute.

One final note: if for whatever reason you cannot change your web forms to inherit from ViewPage, or you need access to the compiled instance of the Page, you can still use them to render content inside a controller action like so.

public ActionResult Index()
{
   var page = (Page)BuildManager
      .CreateInstanceFromVirtualPath("~/path/to/your/webform.aspx", typeof(Page));
   page.ProcessRequest(HttpContext.Current);
   return new EmptyResult();
}

Friday, December 17, 2010

Rewriting WebForms to MVC

This week we finally started to port our project from ASP.NET WebForms to ASP.NET MVC. We made this decision because we were so sick of the grief the Infragistics grid control was causing us: not only does it output horrific HTML, but our dynamic data structure also causes us far too much trouble with the grid.

The main problem with Infragistics is that the product is so mature that there are half a dozen ways to do any particular thing. Since the documentation is so scant and the community so small, it's hard to figure out how to actually accomplish what you're trying to do. A lot of the time it feels like you're working inside a glove box, and the glove box has been spray-painted black.


There really doesn't seem to be a methodical way to solve a particular issue, just a lot of fiddling: set a property here, handle this event, call that method on the column, and so on. This frustration finally boiled over, and we've begun rewriting the app to pull out Infragistics and, at the same time, move it to MVC.

I was dreading doing the port, but so far it has been quite easy. Fortunately, when we designed our application, we built it around our own MVC-style pattern that sat on top of WebForms, similar to what they've done over at WebForms MVP. The code-behind was only used for handling the UI; all of the complex business logic was passed on to controllers. So for the most part it has been a fair amount of copy and paste. We haven't gotten to the grid portion yet, so I may still pay for this early optimism, but I have been pleasantly surprised with our progress so far.

Addendum
We had to keep around a couple of server controls that were reports. So we had to find a way to render WebForms as part of an MVC action. Here's how we did it.