Wednesday, November 26, 2014

npm link with interdependent modules

Sometimes you need to develop multiple interdependent Node modules at the same time. When working on a dependent module, you can test your changes in one of several ways:
  1. Publish your module and run npm update
  2. Copy and paste code into your node_modules folder
  3. Use npm link to use the in-progress version of your module in your other applications
Of these options, only 3 really scales and leaves little room for error. Before going any further, let's review what npm link does.

Let's say you have 2 modules:
  • dependent-module
  • master-module
master-module also has a dependency on dependent-module. Running npm link inside the dependent-module directory will create a symlink at the global node_modules/dependent-module location that points back to that directory. Then, running npm link dependent-module in the master-module directory will add dependent-module as a symlink inside the local node_modules directory. This allows you to make changes to dependent-module and instantly have them available inside master-module.
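
Assuming the two modules live in sibling directories, the two steps look like this:

cd dependent-module
npm link                     # create the global symlink
cd ../master-module
npm link dependent-module    # symlink it into local node_modules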

This works great until you have a module that is used in both of your modules and that also has global state. A good example is mongoose, the excellent MongoDB ODM. Schemas and models are defined on the global mongoose object. If each of your modules has its own copy of mongoose installed, there will be two different mongoose objects, so models defined in one module won't be available in the other. Normally this doesn't happen, because npm is smart enough not to install a dependency that has already been installed higher up the tree, but when using npm link this won't be the case.
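
To make the failure concrete, here's a minimal sketch (module names are from the example above; the User model is hypothetical):

// in dependent-module: register the model on its copy of mongoose
var mongoose = require('mongoose');
mongoose.model('User', new mongoose.Schema({ name: String }));

// in master-module: if this file loads a *second* copy of mongoose,
// its model registry is empty and this throws a MissingSchemaError
var mongoose = require('mongoose');
var User = mongoose.model('User');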

In order to avoid this problem, you will need to run npm link for the shared module in every affected module directory. So in our example above, if both modules need mongoose, you will need to run npm link mongoose in each module directory.
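
Continuing the example:

cd dependent-module
npm link mongoose
cd ../master-module
npm link mongoose

Both modules now resolve to the same global copy of mongoose, so models registered in one are visible in the other.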

Something else to keep in mind: running npm update will destroy the symlinks, meaning you need to re-run npm link dependent-module and npm link mongoose after you run npm update.

Friday, August 29, 2014

AngularJS custom scrollbar directive

On a recent project I needed to create a widget that had a similar look and feel across multiple platforms. This widget had a limited size and needed scrolling. When developing on my MacBook the scrollbars worked fine and everything seemed ok. As soon as we started testing on Windows, the scrollbar width caused a rendering issue, pushing all the content down.

The solution we decided on was a custom scrollbar. After scouring the web for custom scrollbars in JavaScript, I found TinyScrollbar. I liked it because it didn't depend on jQuery; we were using AngularJS and didn't want to bog down our app with jQuery. I decided to tweak it and port it to AngularJS as a directive.

It's available on GitHub and via Bower (bower install ng-tiny-scrollbar), and a demo can be seen here.

Wednesday, July 9, 2014

npm install ENOENT errors

I've been working on a project that is built with NodeJS. Our development environment is a bit of a hybrid: most of the developers are on Macs, with one exception, while our build server is on Windows and our site runs in Windows Azure. For the most part, developing NodeJS on a Mac is quite straightforward. All the tooling is readily available and tends to work well. Not so much on Windows. So when our build server started running into issues installing the requestify package, we chalked it up to something not liking Windows.

We were getting ENOENT errors: npm complained that it couldn't find files on disk that it was supposed to have downloaded. Since the error happened while installing a dependency of a dependency, we thought it might be a long-path issue on Windows. However, we couldn't get it to install even when running from the root directory. We decided on a workaround and switched to a different version of that particular dependency.

Weeks later I ran into the same issue, but this time on my Mac. I found a solution by running npm cache clean before running npm install again. I tried it on the Windows build server and it started working again.

TL;DR
If you're seeing ENOENT errors when running npm install, run npm cache clean and try again.
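
That is:

npm cache clean
npm install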

Thursday, November 14, 2013

Simple TypeMock Auto-Mocking container

TDD is a great methodology for simplifying your code and focusing on delivering business value. When doing TDD, we want to make the process as friction-free as possible. This means making writing tests, and more importantly refactoring, as painless as we can. One way to do that is to use an Auto-Mocking container.

A very common reason for tests to change is that we added a new dependency to our SUT and have to modify all of our previous tests to supply it. One way to avoid this is to use an object that builds your SUT and injects all the appropriate dependencies as mocked objects. So your tests go from looking like this:

var dependency1 = new Mock<IDependency1>();
var dependency2 = new Mock<IDependency2>();
var sut = new MyClass(dependency1.Object, dependency2.Object);

to this:

var mocker = new Automocker();
var sut = mocker.CreateSut<MyClass>();
var dependency1 = mocker.GetMock<IDependency1>();
var dependency2 = mocker.GetMock<IDependency2>();

We can now add another dependency without having to modify this test (assuming that this dependency is not used in this test).

Most of the popular mocking frameworks in use today (Moq, Rhino.Mocks, NSubstitute, etc.) have one or more Auto-Mocking containers that can be downloaded via NuGet. On our current project we are using TypeMock, a very powerful, commercial-grade mocking tool. It can mock out almost anything and is used quite extensively in the SharePoint development world. There is no Auto-Mocking container for it, so I decided to write one.

An Auto-Mocking container needs to be able to do two things: generate a SUT object with all dependencies mocked out, and retrieve the dependencies that were injected. TypeMock uses the following method to generate a mock object:

Isolate.Fake.Instance<IDependency>();

Since this is a static generic method, there is no way to call it with a runtime Type object without using reflection. We can get the MethodInfo object for this method in the following way:

TypeMockInstance = Isolate.Fake.GetType()
    .GetMethods()
    .Where(m => m.Name == "Instance")
    .Select(m => new
    {
        Method = m,
        Params = m.GetParameters(),
        Args = m.GetGenericArguments()
    })
    .Where(x => x.Params.Length == 0 && x.Args.Length == 1)
    .Select(x => x.Method)
    .First();

and we would use it like so:

TypeMockInstance.MakeGenericMethod(type).Invoke(Isolate.Fake, null);
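
Wrapped in a helper, this becomes the GetMockInstance method that CreateSut relies on below (a sketch; the version on GitHub may differ slightly):

private object GetMockInstance(Type type)
{
    // close Instance<T>() over the runtime type and invoke it on Isolate.Fake
    return TypeMockInstance.MakeGenericMethod(type).Invoke(Isolate.Fake, null);
}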

When creating our SUT, we need to use reflection to get the types of the constructor parameters and use the method above to create them. In order to retrieve them at a later point, we need a registry, which will simply be a Dictionary<Type, object>. In the case of multiple constructors, we will pick the largest one. Here's what our CreateSut method looks like:

public T CreateSut<T>()
{
    var type = typeof(T);
    var constructors = type.GetConstructors(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance);

    // pick the constructor with the most parameters
    ConstructorInfo largestConstructor = null;
    foreach (var constructor in constructors)
    {
        if (largestConstructor == null ||
            largestConstructor.GetParameters().Length < constructor.GetParameters().Length)
        {
            largestConstructor = constructor;
        }
    }

    // generate mocks and build up the parameter list
    var parameters = new List<object>();
    foreach (var parameterInfo in largestConstructor.GetParameters())
    {
        if (!_typeRegistry.ContainsKey(parameterInfo.ParameterType))
        {
            _typeRegistry[parameterInfo.ParameterType] = GetMockInstance(parameterInfo.ParameterType);
        }
        parameters.Add(_typeRegistry[parameterInfo.ParameterType]);
    }

    return (T)largestConstructor.Invoke(parameters.ToArray());
}

To get our generated mock objects, we can grab them from our dictionary after our SUT has been created. The full source for the Automocker can be found on GitHub.
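
A minimal GetMock<T> can simply index into that registry (a sketch; the published version may differ):

public T GetMock<T>()
{
    // dependencies were registered by CreateSut, keyed by parameter type
    return (T)_typeRegistry[typeof(T)];
}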

Tuesday, June 18, 2013

Implementing an external service lookup cache

We have a form in our web application that takes an address. The field is an auto-complete field: as the user types their address, the form shows them matching addresses. There is a web service in this organization that helps with address lookup. It takes a string (a partial address) and returns a list of addresses that start with that string, up to a maximum of 1000 addresses. I wanted to implement a cache for this lookup, and it turned out to be an interesting little problem. How do we cache the addresses and, more importantly, how do we decide whether to hit the server or the cache?

An initial thought was that we could store all addresses returned by the service in a HashSet to avoid duplicates. We could then check whether any items in the cache start with the search string and, if so, return those items. Here's what this lookup class might look like:

public class AddressLookup
{
    public static AddressLookup Instance { get; private set; }

    private ISet<AddressType> Addresses
    {
        get
        {
            lock (this)
            {
                if (HttpRuntime.Cache["Addresses"] == null)
                {
                    HttpRuntime.Cache["Addresses"] = new HashSet<AddressType>();
                }
                return (ISet<AddressType>)HttpRuntime.Cache["Addresses"];
            }
        }
    }

    static AddressLookup()
    {
        Instance = new AddressLookup();
    }

    public IEnumerable<AddressType> GetByPartialAddress(string searchAddress)
    {
        lock (this)
        {
            searchAddress = searchAddress.ToUpper();
            var cachedAddresses = Addresses
                .Where(address => address.FormattedAddress.StartsWith(searchAddress, StringComparison.InvariantCultureIgnoreCase))
                .ToArray();
            if (cachedAddresses.Length > 0)
            {
                return cachedAddresses;
            }

            var addresses = new AddressManager().RetrieveByPartialAddress(searchAddress);
            Addresses.UnionWith(addresses);
            return addresses;
        }
    }
}

This idea works, but consider the worst-case performance: traversing a large collection of values, finding no match, and then making the call to the web service after all. Not to mention the problem of an incomplete cache caused by the web service returning only the top 1000 results. To illustrate the issue, consider a web service that returns only the top 2 results:

Total data set:                                               1, 12, 122
Uncached results for search string "1":                       1, 12
Uncached results for search string "12":                      12, 122
Cached results for search string "12" after searching "1":    12

To solve this, I decided to cache the searches in addition to the address list. This fixes both issues: checking for a cached search becomes a cheap set lookup, and the cache is only consulted for prefixes that have actually been searched, so it no longer returns incomplete result sets. Below is the implementation:

public class AddressLookup
{
    public static AddressLookup Instance { get; private set; }

    private ISet<string> Searches
    {
        get
        {
            lock (this)
            {
                if (HttpRuntime.Cache["AddressSearches"] == null)
                {
                    HttpRuntime.Cache["AddressSearches"] = new HashSet<string>();
                }
                return (ISet<string>)HttpRuntime.Cache["AddressSearches"];
            }
        }
    }

    private ISet<AddressType> Addresses
    {
        get
        {
            lock (this)
            {
                if (HttpRuntime.Cache["Addresses"] == null)
                {
                    HttpRuntime.Cache["Addresses"] = new HashSet<AddressType>();
                }
                return (ISet<AddressType>)HttpRuntime.Cache["Addresses"];
            }
        }
    }

    static AddressLookup()
    {
        Instance = new AddressLookup();
    }

    public IEnumerable<AddressType> GetByPartialAddress(string searchAddress)
    {
        lock (this)
        {
            searchAddress = searchAddress.ToUpper();
            if (Searches.Contains(searchAddress))
            {
                return Addresses.Where(address => address.FormattedAddress.StartsWith(searchAddress, StringComparison.InvariantCultureIgnoreCase));
            }

            Searches.Add(searchAddress);
            var addresses = new AddressManager().RetrieveByPartialAddress(searchAddress);
            Addresses.UnionWith(addresses);
            return addresses;
        }
    }
}

The next potential improvement would be a rolling cache, one that keeps only the n most recent searches. I haven't taken this implementation that far, but one approach might be to associate the results with each search in a dictionary, rather than keeping a global address list. Let me know how you might implement a rolling cache, or if you have any suggestions for improving my implementation.
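
For what it's worth, here is one possible sketch of that per-search rolling cache with a least-recently-used eviction policy. RollingAddressCache and its members are hypothetical names, not part of the code above:

public class RollingAddressCache
{
    private readonly int _capacity;
    private readonly object _sync = new object();
    // most recently used search strings live at the front of the list
    private readonly LinkedList<string> _order = new LinkedList<string>();
    private readonly Dictionary<string, LinkedListNode<string>> _nodes = new Dictionary<string, LinkedListNode<string>>();
    // each search string keeps its own result set
    private readonly Dictionary<string, AddressType[]> _results = new Dictionary<string, AddressType[]>();

    public RollingAddressCache(int capacity)
    {
        _capacity = capacity;
    }

    public bool TryGet(string search, out AddressType[] results)
    {
        lock (_sync)
        {
            LinkedListNode<string> node;
            if (_nodes.TryGetValue(search, out node))
            {
                // mark this search as most recently used
                _order.Remove(node);
                _order.AddFirst(node);
                results = _results[search];
                return true;
            }
            results = null;
            return false;
        }
    }

    public void Add(string search, AddressType[] results)
    {
        lock (_sync)
        {
            if (_nodes.ContainsKey(search))
            {
                return;
            }
            _nodes[search] = _order.AddFirst(search);
            _results[search] = results;
            if (_order.Count > _capacity)
            {
                // evict the least recently used search and its results
                var oldest = _order.Last.Value;
                _order.RemoveLast();
                _nodes.Remove(oldest);
                _results.Remove(oldest);
            }
        }
    }
}

GetByPartialAddress would then call TryGet before hitting the service and Add afterwards; eviction keeps the cache bounded to the n most recent searches.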

Thursday, February 28, 2013

How I feel as a developer

Here's a blog with a bunch of animated gifs illustrating what it's like to be a developer.