Monday, January 31, 2011

Using CVS and Git in tandem

Version control systems are indispensable for software developers. They provide many benefits: team development, code recovery, history tracking, and more. Everyone has their favorite, but in a corporate environment it's not always possible to use the one you like. On my current project, the corporate standard is CVS. It's been a while since I've used it. Not to bad-mouth CVS, but the shine has really worn off this VCS over the last decade.

I decided to set up a local Git repository to use for development and use the CVS repository for large-scale commits. This was working well until a second developer came on board. We had to devise a system in which merges were not a giant pain in the butt. The system we eventually settled on works like this: we both have local Git repositories in which we do our feature development, and we also set up a bare repository on the build server that we push to once we've completed our features. This central repository also drives our continuous integration builds.

The last part of the setup is a separate folder containing the CVS checkout; I've named it CVS. This folder also contains another Git repository. The process of checking into CVS goes as follows. First we do a `cvs update` to bring our checkout in line with any CVS check-ins the other developer may have done. We update first because Git is far more forgiving than CVS when the incoming changes include brand-new files that already exist on the file system. After we have synchronized our checkout with the latest from CVS, we pull from our central bare repository. At this point we have to go through and manually `cvs remove` any files that were deleted during feature development and `cvs add` any new files. Once all that has been done, we are ready to check in.

So that's it. The process saves us a lot of headaches while we are doing our feature development and we are able to use our pet VCS without too much trouble.

Thursday, January 27, 2011

Debugging JavaScript

The tools available to web developers have grown a lot in the last 5 years. Time was, the only way to debug JavaScript was to pepper your code with alert statements. Now you can get a variety of plugins and tools that allow you to examine HTML, CSS, JavaScript, and other elements of a web page. Here are some of the better tools out there:

Firebug
One of the best debugging tools out there. It allows you to inspect and modify HTML elements and styles, comes with a great JavaScript debugger, and lets you examine network usage. Available as an add-on for Firefox. (Note: if you're on the Firefox 4.0 beta, make sure you are using Firebug 1.7 with Firefox 4.0 beta 10, as previous versions are broken.)
Internet Explorer Developer Toolbar
Let's face it: if you've developed any kind of web application, you've grown to despise IE. With its quirks and non-standard implementation of all things HTML, CSS, and JavaScript, IE is one of the biggest pain points for any web developer. The Internet Explorer Developer Toolbar tries to ease some of that pain by providing the ability to examine HTML and make modifications to styles.
Visual Studio
A decent tool for debugging JavaScript in IE. Can be a bit of a pain, but contains everything a debugger should: call stack, breakpoints, watch variables, and quick watch.
Chrome Developer Tools
Chrome, one of the easiest browsers to develop for, also comes with a great set of developer tools. You can view and change HTML elements, trace styles, debug JavaScript like a pro, and examine loaded resources, including XHR content. It also provides optimization and performance analysis.

All of these tools are great, but what prompted me to write this blog entry is another great tool out there: JSFiddle. If you've ever tried debugging a small JS problem, you know it's a real pain to constantly update your code, reload the page, and step through the multitude of lines your page might contain. JSFiddle provides a great way to isolate problem code and debug it: you enter your HTML, CSS, and JavaScript and run it all in one page. It also lets you easily load different versions of most of the common OSS JS frameworks, and it even provides versioning of your "fiddles". You don't need an account to use the site, but if you want to save and share your fiddles, I highly recommend signing up.
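As a footnote to the alert-statement era mentioned above: every tool listed here understands the console API and the `debugger;` statement, which make alert-peppering unnecessary. A minimal sketch (the `total` function is invented for illustration):

```javascript
// Invented example: a small function you might want to step through.
function total(items) {
    var sum = 0;
    for (var i = 0; i < items.length; i++) {
        // Old style: alert(items[i]); -- blocks the page on every iteration.
        // With any of the tools above, log to the console instead:
        console.log("item", i, "=", items[i]);
        sum += items[i];
    }
    // Dropping a 'debugger;' statement here pauses execution in Firebug,
    // the IE/Chrome developer tools, or Visual Studio, with the call
    // stack and local variables available for inspection.
    return sum;
}

console.log(total([1, 2, 3])); // logs 6
```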

Saturday, January 22, 2011

Dynamic stored procedures without using IFs

Have you ever written a stored procedure like this?
create procedure GetProducts
   @id int
as
begin
   if @id is not null
      SELECT *
      FROM Products
      WHERE id = @id
   else
      SELECT *
      FROM Products
end
Here's a tip that DBAs and SQL developers have known about for a while. You can shorten the procedure to this:
create procedure GetProducts
   @id int
as
begin
   SELECT *
   FROM Products
   WHERE (@id is null OR id = @id)
end
You should take care to examine the execution plan and make sure the query isn't doing a table scan; otherwise performance will suffer.

Monday, January 17, 2011

The NHibernate Anti-Pattern

On Stack Overflow, a lot of NHibernate questions come with a code snippet similar to the following:

public IList<MyObject> GetObjects()
{
   using (ISession session = sessionFactory.OpenSession())
   {
      return session.CreateCriteria<MyObject>().List<MyObject>();
   }
}

What's wrong with this code? At first glance, not much. We're opening a session, doing our query, and then cleaning up any open resources.

If we were dealing with ADO.NET, opening a connection, doing our work, and then closing the connection would be the typical process. However, NHibernate sessions are a lot more than database connections. One of their biggest benefits is the first-level cache. The session caches objects that have been retrieved by id, as well as saved objects. This means that if you've retrieved an object once, then as long as you haven't closed the session, any time you retrieve that object again NHibernate will hand you the cached version, saving a round trip to the database.

Another benefit is batching database writes. Since updates are stored in the cache, NHibernate doesn't need to write them to the database immediately. If you let NHibernate flush automatically, it can hold its updates until they are needed and then run them all in one round trip to the database.

What about lazy loading? If our MyObject class looks like this:
public class MyObject
{
   public virtual int Id { get; set; }
   public virtual AnotherObject Reference { get; set; }
}
and we wanted to access the Reference property after the session had been closed, we would need to eagerly load each associated AnotherObject. This is because NHibernate uses proxy objects: when your references are lazily loaded and you access the Reference property, NHibernate uses the same session the original object was loaded with to fetch the associated reference. If that session has been closed, you will receive a LazyInitializationException telling you that the associated session has been closed.

So if the pattern in the first code snippet is a bad one, which pattern should we be following? The folks over at NHibernate recommend the Unit of Work pattern. The idea is that you create a session and keep it alive for the duration of a so-called "conversation". An example of a conversation could be the HTTP request in a web application: a session would be opened at the beginning of the request and closed and disposed at the end. You can even wrap the session in a transaction for extra reliability.

In a Windows app it's a little trickier. One way to define a unit of work is in the context of a thread. Another is per user action: for example, if a user clicks a button to update some data, a new session would be created and then closed after all the work has been completed.

Finally, you can use your favorite IoC container to define how and when a new session is created, and then your data access layer can be injected with the reusable session.

Saturday, January 8, 2011

Rendering an Asp.Net WebForm via an MVC action

As part of our rewrite of WebForms to MVC, we have some reports that are generated via Infragistics. I know I said in the previous post that the reason we're doing the rewrite is that we hate Infragistics. However, one thing Infragistics does well is give the developer a lot of power for developing reports.

There is no GUI for report development and the documentation is virtually non-existent, but if you need dynamic reports generated at run time, it's a great library to use.

I won't bore you with the implementation details of our reports, but the fact of the matter is that we need to render a WebForm. How do you do that inside a Controller action? Fortunately, the folks at Microsoft have thought of this scenario: there is an IView implementation called WebFormView. So if you need to return a classic Asp.Net Web Form as part of your action, you can do it like so:

public ActionResult Index()
{
   return View(new WebFormView("~/path/to/your/webform.aspx"));
}

The only catch is that your WebForm must inherit from System.Web.Mvc.ViewPage rather than System.Web.UI.Page. Behind the scenes, MVC will call the BuildManager to build your web form and then call its ProcessRequest method with HttpContext.Current. This means the page is taken through the full Asp.Net page life cycle.

You can even serve up WebControls this way; again, they have to inherit from System.Web.Mvc.ViewUserControl. There is one more limitation: the user control can either contain only HTML elements (although they can have the runat=server attribute), or, if you really need server controls, the user control must contain a form with the runat=server attribute.

One final note: if for whatever reason you cannot change your web forms to inherit from ViewPage, or you need access to the compiled instance of the Page, you can still render them inside a controller action like so:

public ActionResult Index()
{
   var page = (Page)BuildManager
      .CreateInstanceFromVirtualPath("~/path/to/your/webform.aspx", typeof(Page));
   page.ProcessRequest(System.Web.HttpContext.Current);
   return new EmptyResult();
}