Finding Out which Groups a User is a Member Of When Using Windows Authentication in Asp.Net

March 13, 2008

When using Windows authentication with Asp.Net, I often need to know which Active Directory groups a user is a member of. Now I know that you can do something like:

if (User.IsInRole("Admin"))
{
    //Give Access to Secrets
}

The problem with this is you need to know the name of the group ahead of time. And what if you are on a network where the full name of a group is not always clear? The actual group name may be “MyDomain\Admin”. So I wrote up a quick way to just get a list of all the groups a user is a member of. It isn’t super straightforward (as far as which types you need to cast to), so I thought I would list it out here:

public static List<string> GetGroups(RolePrincipal user)
{
    List<string> groups = new List<string>();

    // With Windows authentication, the principal's identity is a WindowsIdentity
    WindowsIdentity identity = user.Identity as WindowsIdentity;
    foreach (IdentityReference group in identity.Groups)
    {
        NTAccount account = (NTAccount)group.Translate(typeof(NTAccount));

        groups.Add(account.Value);
    }

    return groups;
}

The use of it on a web page would be something like:

List<string> groups = GetGroups(User as RolePrincipal);

Keep in mind that this is assuming you are using Windows Authentication. So the weird part of the code above is:

NTAccount account = (NTAccount)group.Translate(typeof(NTAccount));

If you skip this step, you will just get a bunch of raw security identifiers (SIDs) that won’t do you much good.
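For example, here is a small sketch (assuming identity is the WindowsIdentity from the method above; the exact values depend on your machine and domain) showing the difference:

foreach (IdentityReference group in identity.Groups)
{
    // Without translation you get a raw SID string, e.g. "S-1-5-32-544"
    Console.WriteLine(group.Value);

    // After translating to NTAccount you get a readable name, e.g. "BUILTIN\Administrators"
    Console.WriteLine(group.Translate(typeof(NTAccount)).Value);
}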

Also, sorry about the long title. I just can’t think of a clever title today. Maybe I should add something like “Ultimate Edition for Developers” on the end to make it extra clear.



Using BlogEngine.net as a General Purpose Content Management System – Part I

February 28, 2008

So I keep running into the same problem – I am building a small website for somebody (in this case, my Mom) and I need to provide them with a way to update the content of their site so I don’t have to. Basically, I need a lightweight and flexible content management system that is easy to use.

In this series of posts, I will show how I converted a small website from just standard .aspx pages into a site where all pages are editable by Windows Live Writer and via an online interface. In Part I of this series I will just set some background on how I am approaching the creation of this lightweight CMS.

If The Shoe Fits…

When I first thought of a lightweight CMS, I thought of Graffiti. It sounds like exactly what I need. So I downloaded the express edition and started evaluating it. It seemed like a nice product and all, but it is not free for commercial use ($399 is the cheapest commercial license), and I can’t afford that price tag when building small websites.

Enter BlogEngine.net, my favorite blogging platform. There, I said it. I host my blog on WordPress, but I like BlogEngine.net better. In fact, I will probably be migrating to BlogEngine.net in the near future. How do I know I like it so much? Well, I use it to run my wife’s blog, and I am constantly tinkering around with her site because I enjoy using BlogEngine.net so much.

I realized that BlogEngine.net had all of the key pieces I needed for my lightweight CMS:

  1. A WYSIWYG Editor
  2. A Metaweblog interface
  3. Tons of extensibility

Basic Idea

I decided to base my CMS implementation on the concept of pages. Most blog engines have two distinct types of content: pages and posts. Posts are the typical type of content that becomes part of your blog’s feed, whereas pages are usually static content which can be anything outside of a blog post (for example, an ‘About Me’ page). BlogEngine.net already has everything I need to get the content of a page created and persisted in a data store (it supports XML and SQL Server out of the box). I decided to write a web control which I can place on any webpage to include the contents of a given page from the data store.

I made a control called PageViewer which you can place on the page like this:

<blog:PageViewer ID="view" runat="server" DisplayTitle="false"
    PageId="167eb7f3-135b-4f90-9756-be25ec10f14c" />

This control basically just looks up the page using the given id (this functionality is all provided by the existing BlogEngine.Core library) and displays its content. Here is the rendering logic:

// 'writer' here is the HtmlTextWriter passed to the control's rendering method
BlogEngine.Core.Page page = null;
if (PageId != Guid.Empty)
    page = BlogEngine.Core.Page.GetPage(PageId);

if (page != null)
{
    ServingEventArgs arg = new ServingEventArgs(page.Content, ServingLocation.SinglePage);
    BlogEngine.Core.Page.OnServing(page, arg);

    if (arg.Cancel)
        Page.Response.Redirect("error404.aspx", true);

    if (DisplayTitle)
    {
        writer.Write("<h1>");
        writer.Write(page.Title);
        writer.Write("</h1>");
    }

    writer.Write("<div>");
    writer.Write(arg.Body);
    writer.Write("</div>");
}

This code is pretty straightforward – all it does is get an instance of the page and then display its title in an <h1> tag and its body in a <div> tag. This logic comes straight from the existing page retrieval code in BlogEngine.net. This web control is pretty much the only new code I had to write. The rest of the project mostly involves moving files around and removing parts of the BlogEngine.net framework that I don’t need.
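For reference, here is a rough sketch of how such a control might be declared (the property names match the markup above; the base class and state management details are just illustrative):

public class PageViewer : WebControl
{
    // The id of the BlogEngine.net page to look up and display
    public Guid PageId
    {
        get { return ViewState["PageId"] == null ? Guid.Empty : (Guid)ViewState["PageId"]; }
        set { ViewState["PageId"] = value; }
    }

    // Whether to render the page title in an <h1> tag
    public bool DisplayTitle
    {
        get { return ViewState["DisplayTitle"] == null ? true : (bool)ViewState["DisplayTitle"]; }
        set { ViewState["DisplayTitle"] = value; }
    }

    protected override void RenderContents(HtmlTextWriter writer)
    {
        // The rendering logic shown above goes here
    }
}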

Armed with this control, we are ready to start converting the static pages from the old version of the website to be BlogEngine.net pages which can be stored and retrieved using the BlogEngine.Core classes.

In part II of this series, I will cover what changes I made to the website project used for BlogEngine.net blogs to make it function like a straight up website, not a blog. Any feedback is welcome.



Google Charts for Asp.Net now on Codeplex

February 10, 2008

I have received very positive feedback on my Asp.Net control for Google Charts, so I decided to put it up on Codeplex to allow people to participate and add code to the project.

I am currently working on a project roadmap so please let me know if you are interested in participating. One feature I have already begun work on is giving the control support for data binding.

Here is some example data binding code I have gotten working:

protected void Page_Load(object sender, EventArgs e)
{
    chart.DataSource = GetDataSource();
    chart.DataBind();
}

private DataTable GetDataSource()
{
    DataTable table = new DataTable();
    
    table.Columns.Add("Type", typeof(string));
    table.Columns.Add("Jan", typeof(float));
    table.Columns.Add("Feb", typeof(float));
    table.Columns.Add("Mar", typeof(float));

    table.Rows.Add("Men", 68, 78, 88);
    table.Rows.Add("Women", 68, 58, 78);
    table.Rows.Add("Both", 88, 48, 98);
    return table;
}

I am just binding to a simple DataTable which has one column containing the labels for the chart and multiple other columns which contain the data for the chart. The code above produces the following chart:

[Image: the chart generated from the DataTable above]
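For completeness, the code-behind above assumes a chart control declared in the page markup, along the lines of the declarative syntax from my earlier post (the Title value here is just illustrative):

<web:Chart ID="chart" runat="server" Width="300px" Height="150px"
    Title="Monthly Totals" Type="Line" EnableLegend="true" />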

I look forward to seeing this project improve.



C# Language Improvements: Know What You Are Getting Into

February 6, 2008

The C# language has changed a lot since it was first introduced in 2000. C# 2.0 brought us:

  • Generics
  • Anonymous methods
  • Iterators (yield return)
  • Nullable types
  • Partial classes

C# 3.0 brought us:

  • LINQ query syntax
  • Lambda expressions
  • Extension methods
  • Anonymous types
  • Implicitly typed local variables (var)
  • Object and collection initializers

And that is just the new language features in the last 3 years! This does not include all of the new classes added to the BCL over the same time period.

Know the Cost

I am as excited about these new changes as the next guy. I have used various combinations of these improvements in my projects with great success. But is there a cost to new features?

Recently I read a couple of articles that reminded me that you not only have to understand how a feature is used, you also have to understand the cost of using it. Sometimes that cost is performance, other times it could be something like readability or portability. I’ll take one example from the 2.0 framework and then one from the 3.5 framework to show that you need to understand the implications of using new features.

Yield Return

First, there is Fritz Onion’s recent post on the amount of code generated by the yield return syntax introduced in C# 2.0:

The array allocation function generated a total of 20 lines of IL, but the yield return function, if you included all of the IL instructions for the IEnumerable class generation as well was over 100! That’s a 5x penalty in code generation to save 18 characters of typing

So is the 5x penalty ever worth it? Yes, of course it is. There are things you can do with yield that you couldn’t do previously. In most cases, the use of yield improves readability dramatically. It also makes the task of creating a custom enumerator much easier.
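To make the trade-off concrete, here is a small sketch (not Fritz’s example) contrasting the two styles; both produce an IEnumerable<int>, but the second never builds a collection:

// Eager version: allocates and fills a list up front
public static IEnumerable<int> GetNumbersEager()
{
    List<int> numbers = new List<int>();
    for (int i = 0; i < 10; i++)
        numbers.Add(i);
    return numbers;
}

// Lazy version: the compiler generates a state machine behind the scenes,
// and values are produced one at a time as the caller enumerates
public static IEnumerable<int> GetNumbersLazy()
{
    for (int i = 0; i < 10; i++)
        yield return i;
}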

In his article The power of yield return, Joshua Flanagan points to an example where using the yield return improved performance dramatically:

With the improved implementation that took advantage of the yield keyword, the program was able to finish its job in less than half the time! It also used much less memory, as it never had to store all 9 strings in a collection. Now imagine the potential impact if GetCombinations returned a collection with thousands of entries!

The point here is that you have to know the costs and how they fit in with the requirements of the project you are working on. If you are building something that must support thousands of concurrent users and must be very performant, you will most likely choose to use yield return despite the extra code generated by the compiler.

LINQ is Dead, Long Live LINQ

Most of the time when I show somebody the new language enhancements in C# 3.0, the thing that interests them the most (and rightly so) is LINQ. It is just so different to see query syntax built right into the language. The next question they usually ask is how the performance of something written using LINQ compares to the same procedure written using just C# 2.0 control structures.

Steve Hebert recently wrote a post about LINQ performance where he concludes:

Despite my best efforts I just couldn’t make my hand-written code perform as poorly as Linq

What? Seriously? Wow. Then I guess we should stay away from LINQ, right? Well, if you look closer at his post, you will notice that he optimized his non-LINQ version of the code a little bit for the underlying data structure, whereas LINQ has to be able to operate regardless of the underlying data structure. I am not so much interested in the particulars as I am in the idea that you should understand these things before you start replacing all of your for loops with LINQ queries.
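To make the comparison concrete, here is a trivial made-up example of the same filter written both ways; the LINQ version reads better, but it is worth measuring both in performance-critical code:

int[] numbers = { 5, 250, 42, 300, 17 };

// C# 2.0 style: an explicit loop over the data structure
List<int> bigValues = new List<int>();
foreach (int value in numbers)
{
    if (value > 100)
        bigValues.Add(value);
}

// C# 3.0 style: the same filter expressed as a LINQ query
// (requires a using System.Linq; directive)
var bigValuesLinq = from value in numbers
                    where value > 100
                    select value;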

Shifting Sands

One more recent post that fits in here. Rob Conery wrote about his recent experience where his 5 year old code has come back to haunt him:

I spose the moral of the story is to always view the concept of maintenance with an eye towards shifting toolsets and platforms. In 4 years you will need to support the ASP.NET 2.0 site you’re on now, using Visual Studio 2012 and it’s Silverlight-generated Scaffolds :).

This is a great reminder of how tough it can be to deal with changes in technology. I mean, come on, he couldn’t find a machine to compile his 5 year old code?!? I know we can all relate. With the technology learning curve growing at a semi-exponential rate, just imagine where that same code will be in another 5 years.

Back to Machine Code

Obviously I am not advocating that we give up on new features just because they might not be as fast or resource efficient as our hand-coded machine code. In fact, I think we need to embrace abstractions wherever possible. They make our jobs as software developers much easier. Just be smart about it. Have some idea of what is going on behind the covers. It is important to realize that yield return eventually becomes a bunch of compiler-generated code, and that LINQ queries handle the general case, so they may not always be as efficient as writing the code by hand.

Maybe the documentation on these new language features could do a better job at explaining what sorts of considerations we need to take into account when using them in our applications.



C# Enum Craziness: Sometimes What You Expect Isn’t The Case

January 28, 2008

I learned something new today about enums that I find really weird. Let’s start with the following test enum:

public enum Action
{
    Run = 2,
    Walk = 4,
    Crawl = 8
}

and then some code to do something with that enum:

static void Main(string[] args)
{
    Console.WriteLine((Action)2);
    Console.WriteLine((Action)4);
    Console.WriteLine((Action)8);
    Console.WriteLine((Action)10);
}

What do you think will happen here? Will it even compile?

When I saw this code snippet I said to myself, “The first three lines look ok, but the last line won’t work because 10 isn’t a valid value for this enum.” Well, I was wrong. This program actually outputs:

Run
Walk
Crawl
10

Huh?!? How could this be? 10 isn’t a valid value according to my enum definition!

This Should Never Happen…Right?

Ok, let’s try something different. How about a method?

public static void Execute(Action action)
{
    switch (action)
    {
        case Action.Run:
            Console.WriteLine("Running");
            break;
        case Action.Walk:
            Console.WriteLine("Walking");
            break;
        case Action.Crawl:
            Console.WriteLine("Crawling");
            break;
        default:
            Console.WriteLine("This will never happen! {0}", action);
            break;
    }
}

Surely you can’t pass anything into this method other than one of the 3 values defined in my enum. So let’s run some code:

static void Main(string[] args)
{
    Execute(Action.Run);
    Execute(Action.Walk);
    Execute((Action)55);
}

What do you think this does? Well, it outputs:

Running
Walking
This will never happen! 55

Just about this time you must be thinking: “This has to be a bug!” Well, it is not. It is by design. Here is the excerpt from the C# language specification:

14.5 Enum values and operations

Each enum type defines a distinct type; an explicit enumeration conversion (Section 6.2.2) is required to convert between an enum type and an integral type, or between two enum types. The set of values that an enum type can take on is not limited by its enum members. In particular, any value of the underlying type of an enum can be cast to the enum type, and is a distinct valid value of that enum type.

Wow, that is not at all what I expected when it comes to limiting the possible values of enums. So what are enums good for then? Are they just for code readability? Between this little revelation and my previous hack to associate string values with enums, I am losing faith in enums.

Can You Handle it?

So what is the best way to check for this in your methods? Should you throw an exception if you receive an enum value you were not expecting? Should you just ignore it? Well, let’s see what the .Net base class libraries do.

First, let’s start with the System.IO.File class. What happens if I run the following code?

File.Open(@"C:\temp\test.txt", (FileMode)500);

Well, it throws a System.ArgumentOutOfRangeException with the message “Enum value was out of legal range.”. Ok, makes sense – I did pass in something out of range.

Let’s try reflection. What does the following code snippet output?

PropertyInfo[] info = typeof(StringBuilder)
    .GetProperties((BindingFlags)303);

Console.WriteLine(info.Length);

Well, it just outputs ‘0’ which means in this case it is just being ignored and an empty array is returned.

What about System.String? Let’s try this code snippet:

"TestString".Equals("TESTSTRING", (StringComparison)245);

Well, it turns out this throws an exception, but instead of a System.ArgumentOutOfRangeException like System.IO.File threw, it throws a System.ArgumentException with the message “The string comparison type passed in is currently not supported.” Ok, so this is kind of the same, but still a little inconsistent if you ask me.

Is Anything Safe These Days?

So what is a developer to do? Obviously you need to be aware of this when you are receiving enum types from publicly facing code. It seems there is no clear guidance on this that I can find. The C# language specification explains the behavior but doesn’t really give any guidance on why or how this should be handled. So the only other place to turn for guidance is one of my favorite .Net books, Framework Design Guidelines by Brad Abrams and Krzysztof Cwalina. In their section on enums, I can’t find any guidance on how to handle out-of-range enum values. I do, however, find guidance that we should be using enums:

DO use an enum to strongly type parameters, properties, and return values that represent sets of values

They also suggest that enums should be favored over static constants:

DO favor using an enum over static constants

And Jeffrey Richter (while I am pointing out my favorite .Net books, I have to add Mr. Richter’s CLR via C#, which contains priceless information on the CLR that you can’t find anywhere else) adds the following commentary:

An enum is a structure with a set of static constants. The reason to follow this guideline is because you will get some additional compiler and reflection support if you define an enum versus manually defining a structure with static constants.

So I guess we do continue to use enums and just know that we can’t always trust their values to be valid. Do you think the C# specification should be amended to include a recommended behavior for out-of-range enum values? Perhaps Brad and Krzysztof can include something in the second edition of Framework Design Guidelines.
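For what it is worth, one way to guard against out-of-range values at the boundaries of your own code is Enum.IsDefined (a sketch; note that it uses reflection, so it is not free, and it does not handle bitwise combinations of [Flags] values):

public static void Execute(Action action)
{
    // Reject values that are not declared on the enum, e.g. (Action)55
    if (!Enum.IsDefined(typeof(Action), action))
        throw new ArgumentOutOfRangeException("action");

    // ... switch on the known values as before ...
}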



Associating Strings with enums in C#

January 17, 2008

I have seen other great articles outlining the benefits of some pretty clever and useful helper classes for enums. Many of these methods almost exactly mirror methods I had written in my own EnumHelper class. (Isn’t it crazy when you imagine how much code duplication like this there must be out there!)

One thing that I don’t see emphasized much is associating string values with enums. For example, what if you want to have a drop-down list where you can choose from a list of values (which are backed by an enum)? Let’s start with a test enum:

public enum States
{
    California,
    NewMexico,
    NewYork,
    SouthCarolina,
    Tennessee,
    Washington
}

So if you made a drop down list out of this enum, using the ToString() method, you would get a drop down that looks like this:

[Image: a drop-down list showing the raw enum names, with entries like “NewMexico” and “SouthCarolina” run together without spaces]

While most people will understand this, it should really be displayed like this:

[Image: a drop-down list showing friendly names like “New Mexico” and “South Carolina”]

“But enums can’t have spaces in C#!” you say. Well, I like to use the System.ComponentModel.DescriptionAttribute to add a more friendly description to the enum values. The example enum can be rewritten like this:

public enum States
{
    California,
    [Description("New Mexico")]
    NewMexico,
    [Description("New York")]
    NewYork,
    [Description("South Carolina")]
    SouthCarolina,
    Tennessee,
    Washington
}

Notice that I do not put descriptions on items where the ToString() version of that item displays just fine.

How Do We Get To the Description?

Good question! Well, using reflection of course! Here is what the code looks like:

public static string GetEnumDescription(Enum value)
{
    FieldInfo fi = value.GetType().GetField(value.ToString());

    DescriptionAttribute[] attributes =
        (DescriptionAttribute[])fi.GetCustomAttributes(
        typeof(DescriptionAttribute),
        false);

    if (attributes != null &&
        attributes.Length > 0)
        return attributes[0].Description;
    else
        return value.ToString();
}

This method first looks for the presence of a DescriptionAttribute and if it doesn’t find one, it just returns the ToString() of the value passed in. So

GetEnumDescription(States.NewMexico);

returns the string “New Mexico”.

A Free Bonus: How to Enumerate Enums

Ok, so now we know how to get the string value of an enum. But as a free bonus, I also have a helper method that allows you to enumerate all the values of a given enum. This will allow you to easily create a drop down list based on an enum. Here is the code for that method:

public static IEnumerable<T> EnumToList<T>()
{
    Type enumType = typeof(T);

    // C# does not allow 'where T : System.Enum' as a generic constraint,
    // so check the type at runtime instead
    if (enumType.BaseType != typeof(Enum))
        throw new ArgumentException("T must be of type System.Enum");

    Array enumValArray = Enum.GetValues(enumType);
    List<T> enumValList = new List<T>(enumValArray.Length);

    // Note: this assumes the enum's underlying type is int (the default)
    foreach (int val in enumValArray)
    {
        enumValList.Add((T)Enum.Parse(enumType, val.ToString()));
    }

    return enumValList;
}

As you can see, the code for either of these methods isn’t too complicated. But used in conjunction, they can be really useful. Here is an example of how we would create the drop down list pictured above based on our enum:

DropDownList stateDropDown = new DropDownList();
foreach (States state in EnumToList<States>())
{
    stateDropDown.Items.Add(GetEnumDescription(state));
}

Pretty simple huh? I hope you find this as useful as I do.
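One small variation worth mentioning: if you also need to map the selection back to the enum on postback, you can store the enum name as the ListItem value alongside the friendly description (a sketch, not part of the helper methods above):

DropDownList stateDropDown = new DropDownList();
foreach (States state in EnumToList<States>())
{
    // Display the friendly description, but keep the enum name as the value
    stateDropDown.Items.Add(new ListItem(GetEnumDescription(state), state.ToString()));
}

// Later, e.g. in a postback handler, parse the selection back into the enum
States selected = (States)Enum.Parse(typeof(States), stateDropDown.SelectedValue);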

One More Example

There is one more scenario where I often find myself needing to associate string values with enums: dealing with a legacy system based on constant strings. Let’s say you have a library that has the following method:

public void ExecuteAction(int value, string actionType)
{
    if (actionType == "DELETE")
        Delete();
    else if (actionType == "UPDATE")
        Update();
}

(I tried to make this look as legacy as I could for a contrived example.) What happens if somebody passes in “MyEvilAction” as a value for actionType? Well, whenever I see hard-coded strings, that is a code smell that could possibly point to the use of enums instead. But sometimes you don’t have control over legacy code and you have to deal with it. So you could make an enum which looks like this:

public enum ActionType
{
    [Description("DELETE")]
    Delete,
    [Description("UPDATE")]
    Update
}

(I know, I know, this is a very contrived example.) Then you could call the ExecuteAction method like this:

ExecuteAction(5, GetEnumDescription(ActionType.Delete));

This at least makes the code more readable and may also make it more consistent and secure.



Finally – Asp.Net MVC

December 10, 2007

Those of you who have been following the new Asp.Net MVC (Model View Controller) framework know how much developers have been chomping at the bit (no pun intended) to try out the new framework. Well, that day has finally arrived!

ASP.NET 3.5 Extensions preview

For those of you who haven’t been following the development of this framework:

ASP.NET MVC is an architecture that enables you to easily maintain separation of concerns in your applications, as well as facilitate clean testing and test driven development.

You can read some great articles on Scott Guthrie’s blog and Rob Conery’s blog.

I definitely plan on checking it out and posting my thoughts and feedback.


Asp.Net Control For Google Charts

December 9, 2007

Google has launched a new service which allows you to very simply build charts for any web application. Their design decision to base the entire interface to the service on the format of the query string is very interesting to me. Basically, you send them a URL and they return a PNG image of a chart. The little query string language they use for the charts is not the cleanest thing in the world, but it is simple, which is nice.

Here is the official description:

The Google Chart API is an extremely simple tool that lets you easily create a chart from some data and embed it in a webpage. You embed the data and formatting parameters in an HTTP request, and Google returns a PNG image of the chart. Many types of chart are supported, and by making the request into an image tag you can simply include the chart in a webpage.
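In other words, a chart is requested with nothing more than an image tag pointing at the API. Here is roughly the “hello world” pie chart example from Google’s documentation:

<img src="http://chart.apis.google.com/chart?cht=p3&chd=t:60,40&chs=250x100&chl=Hello|World"
    alt="Sample pie chart" />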

A Weekend Coding Project

I really enjoy building custom Asp.Net web controls so I decided to try my hand at creating a control to wrap the Google chart functionality. Now, this is really just a first pass so it is definitely not bug free or feature complete. I just figure I can put it out there and get some feedback to see if I should continue in the same direction or to see if I am way off. Also, I can use this as an opportunity to share a little about how I approach web control design.

Note: I realize that I am not the first or only one to get the idea to create a Google chart Asp.Net control, but since my approach is so different, I figured I would go ahead and submit my own.

So Many Classes

Maybe I am a little too obsessive or just set in my ways, but it always seems like when I set out to tackle a problem like this, there is a set of helper/utility classes that I prefer to use. Over time this creates a sticky situation because I have so many copies of the same or similar classes lying around that it becomes a maintenance nightmare. I have considered just creating my own utility assembly, but then people would have to deploy that with any solution that they use my code in. I prefer to release my code as a self-contained assembly whenever possible, so I guess I will just put up with it for now.

Here is a quick list of the classes I used on this project:

  • ColorHelper
  • EnumHelper
  • DelimitedList
  • QueryStringHelper
  • The state managed classes

I use ColorHelper to translate System.Drawing.Color objects to and from their hexadecimal string counterparts. EnumHelper contains methods for getting Description attributes associated with enums as well as some parsing methods similar to these. I have written about DelimitedList and the state managed classes before. I make heavy use of QueryStringHelper to build the query string which represents the chart data.

Heavy on the Declarative Syntax

I have approached this problem by trying to represent the entire chart declaratively right in the aspx code. Here is an example of a line chart:

<web:Chart ID="chart" runat="server" Width="300px" Height="150px"
    Title="Transportation" Type="Line" EnableLegend="true">
    <DataSets>
        <web:DataSet Color="ForestGreen" Label="Cars">
            <web:DataPoint Value="5" />
            <web:DataPoint Value="9" />
            <web:DataPoint Value="21" />
            <web:DataPoint Value="30" Marker="Arrow" />
            <web:DataPoint Value="25" />
            <web:DataPoint Value="36" />
        </web:DataSet>
        <web:DataSet Color="Red" Label="Trucks">
            <web:DataPoint Value="7" />
            <web:DataPoint Value="3" />
            <web:DataPoint Value="13" />
            <web:DataPoint Value="13" />
            <web:DataPoint Value="16" />
            <web:DataPoint Value="25" />
        </web:DataSet>
        <web:DataSet Color="Orange" Label="Motorcycles">
            <web:DataPoint Value="18" />
            <web:DataPoint Value="25" />
            <web:DataPoint Value="18" />
            <web:DataPoint Value="13" />
            <web:DataPoint Value="25" />
            <web:DataPoint Value="23" />
        </web:DataSet>
    </DataSets>
</web:Chart>

As you can see, a chart with any sizable amount of data can become very cumbersome. The output of this chart looks like this:

[Image: the rendered line chart, titled “Transportation”]

The url used to generate this chart looks like this:

http://chart.apis.google.com/chart?chd=t:5,9,21,30,25,36|7,3,13,13,16,25|18,25,18,13,25,23
&cht=lc&chs=300x150&chco=228b22,ff0000,ffa500
&chm=a,000000,0,3,10&chtt=Transportation&chdl=Cars|Trucks|Motorcycles

You can change the type of chart being generated through the Type property to produce, for example, bar charts, Venn diagrams, and pie charts.

Feedback Please

What I really need right now is some feedback. Do you find this helpful at all? Is my design way off? What would be the ideal way to build charts using the Google Api?

Please – download the library, try it out. Download the source and check it out. Post comments letting me know what you think.

Also, let me know if you would like me to write more about the details of the code or the usage of the control.

Source: Google Chart Source | Assembly: Google Chart Assembly



Asp.Net Ajax: How do you know all of your Ajax calls have completed?

August 22, 2007

Some Background

Here is a situation I ran into recently: I have an Asp.Net Ajax service (.asmx) which has a method used to create rows in a database, similar to:

[WebMethod]
public int CreateItem(string description, double value)

This method basically creates a row in the database and returns the id of the item created. It is called from JavaScript like this:

function MyButtonClicked() {
    MyNamespace.Service.CreateItem("My Item", 5.3, OnSucceeded, OnFailed);
}

function OnSucceeded(result, userContext, methodName) {
}

function OnFailed(error, userContext, methodName) {
}

If you are not familiar with calling Ajax web services from JavaScript, you can read more in the Calling Web Services from Client Script in ASP.NET AJAX article on asp.net. Basically, I am just calling my CreateItem method (which is part of a .asmx service) from JavaScript – how this magic happens isn’t really important to understanding the problem I was facing.

Notice that the last two parameters I passed into my CreateItem call in JavaScript are references to callback methods that will be called depending on whether my call succeeded or failed. A successful call will end up calling back my OnSucceeded method, which has 3 parameters:

  • result – this contains any values returned from my method. In this case it will be the id of the item that was inserted.
  • userContext – this is an optional item that can be passed into the original call to the method and will just be passed on to the callback method. I am not using this parameter in this situation
  • methodName – this is the name of the method that was called which resulted in the OnSucceeded method being called. In my case, this will be the string “CreateItem”

Now for the Problem

Ok, now with all that background, I can actually spell out the problem I had to solve. In many cases, I call the CreateItem method many times in a row. So if I need to create three items in the database, I may do something like this in JavaScript:

MyNamespace.Service.CreateItem("My first Item", 1, OnSucceeded, OnFailed);
MyNamespace.Service.CreateItem("My second Item", 2, OnSucceeded, OnFailed);
MyNamespace.Service.CreateItem("My third Item", 3, OnSucceeded, OnFailed);

Notice that all three calls have the same callback on success. Now let’s say I want to create these three items, and as soon as all three calls have completed successfully I want to forward the user to another page. At first you may think I can just do something like this:

function OnSucceeded(result, userContext, methodName) {
    if (methodName == "CreateItem") {
        window.location = "NextPage.aspx";
    }
}

This would forward the user to a new page as soon as our call to CreateItem has completed successfully. There is one huge problem though: how do we know which call to CreateItem succeeded? The first one? The second one? The third one?

Obviously we only want to forward the user in the case that our final call has succeeded. As you can see, this problem only gets worse if we have other calls on the same page that could be pending at the time we want to forward the user to the next page.

My Solution – Count Your Calls

So here is how I solved the problem: I made a variable which I use to keep track of how many pending calls there are at any given time. At the top of my JavaScript I first declare:

var pendingCalls = 0;

Then, each time I make a call, I increment the counter:

++pendingCalls;
MyNamespace.Service.CreateItem("My first Item", 1, OnSucceeded, OnFailed);
++pendingCalls;
MyNamespace.Service.CreateItem("My second Item", 2, OnSucceeded, OnFailed);
++pendingCalls;
MyNamespace.Service.CreateItem("My third Item", 3, OnSucceeded, OnFailed);

Then I need to make sure I decrement the counter each time a call succeeds or fails:

function OnSucceeded(result, userContext, methodName) {
    --pendingCalls;
}

function OnFailed(error, userContext, methodName) {
    --pendingCalls;
}

This way, we know how many pending calls we have at any given time. Then to forward the user to a new page once all calls are completed, I did the following:

function OnSucceeded(result, userContext, methodName) {
    --pendingCalls;
    if (methodName == "CreateItem" && pendingCalls == 0) {
        window.location = "NextPage.aspx";
    }
}
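One small refinement I have considered (just a sketch, not something the code above requires) is to wrap the service call so the counter bookkeeping lives in one place instead of at every call site:

function createItem(description, value) {
    // Increment the counter and kick off the service call in one step
    ++pendingCalls;
    MyNamespace.Service.CreateItem(description, value, OnSucceeded, OnFailed);
}

createItem("My first Item", 1);
createItem("My second Item", 2);
createItem("My third Item", 3);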

As you can see, asynchronous programming can be very difficult. Has anybody else had to solve this problem? If so, how did you do it?


Using JQuery to Make Asp.Net Play Nice with JavaScript

August 20, 2007

I have found that developing an Asp.Net application that makes heavy use of JavaScript can be very difficult. One of the major pain points is dealing with the horrendously long ids that Asp.Net server controls generate. For example, the following Asp.Net markup:

<asp:TextBox ID="myTextBox" runat="server" />

can cause the TextBox to be rendered as:

<input name="ctl00$ContentPlaceHolder1$myTextBox" 
    type="text" id="ctl00_ContentPlaceHolder1_myTextBox" />

if the page uses a MasterPage or any moderately complex control hierarchy. Asp.Net does this to minimize naming conflicts. This is the whole concept of a naming container. Asp.Net uses the INamingContainer interface to mark controls that create a new naming scope for the controls rendered inside of them. The end result is a really long id that can change based on the item’s place in the control hierarchy. This is a problem if you want to use the elements rendered by server controls in your client-side JavaScript code.

First, the Hack

One technique I have seen for getting around this has a definite bad code smell to it. You basically have Asp.Net render the item’s ClientID inside your JavaScript code using the <%= %> syntax, like so:

<script type="text/javascript">
    var myValue = document.getElementById('<%= myTextBox.ClientID %>');
</script>

this will render as

<script type="text/javascript">
    var myValue = document.getElementById('ctl00_ContentPlaceHolder1_myTextBox');
</script>

This technique has a couple of huge shortcomings:

  • It is really hard to maintain
  • It doesn’t support having your JavaScript in a library or a separate .js file

JQuery to the Rescue…sort of

There are many JavaScript client libraries out there that help ease the headache of writing JavaScript code that works across browsers. I am currently working on a project where we have decided to use JQuery, which I am very impressed with.

One of the main features of the JQuery library is its use of selectors to find elements on the page. I will not get too deeply into this rich DOM querying language here, but you can read all about it in the JQuery documentation.

JQuery selectors allow me to find items in the page many different ways. For example, I can find the textbox in my example above using the following code:

var myTextBox = $("input:text");

This will actually find every textbox on the page, but since we have only one textbox on our page, it works. Alternatively, you can use something like the CSS class of the item. Suppose we gave the textbox a CSS class of ‘textInput’:

<asp:TextBox ID="myTextBox" runat="server" CssClass="textInput" />

Now, the control renders as

<input name="ctl00$ContentPlaceHolder1$myTextBox" 
    type="text" id="ctl00_ContentPlaceHolder1_myTextBox" 
    class="textInput" />

so we can use JQuery to select just this item using the following javascript

var myTextBox = $("input:text[@class=textInput]");

or

var myTextBox = $("input.textInput");

or even simpler

var myTextBox = $(".textInput");

There are many ways to select the same item, but you get the idea.

So does this solve our problem with long IDs? No, not really, we would still have to use the following javascript to select this item by id:

var myTextBox = $("#ctl00_ContentPlaceHolder1_myTextBox");

The Solution

As I was working through this problem in my head, I remembered something about the DOM: elements are not limited to the attributes that already exist on them. You are not restricted to just using ID or class. So why not just come up with your own attribute to use just for the purpose of selecting DOM elements? Here is a pretty workable solution I came up with:

<asp:TextBox ID="myTextBox" runat="server" ClientSelector="myTextBox" />

I just use a made-up attribute called ‘ClientSelector’ (you can use whatever name you fancy) and it renders like this:

<input name="ctl00$ContentPlaceHolder1$myTextBox" type="text" 
    id="ctl00_ContentPlaceHolder1_myTextBox" 
    ClientSelector="myTextBox" />

so now we can use JQuery to select the item with this statement

var myTextBox = $("input[@ClientSelector=myTextBox]");

So now we can use the same selector for just this one textbox whether or not it is inside a MasterPage. We don’t have to care how the id renders anymore. I have found this technique very useful. What do you think? Is this also too much of a hack?