The Content Security Policy Level 2 specification defines a mechanism for providing policies around the sources from which an application will be loading resources. This allows for better protection against many different injection vulnerabilities.

In this post I'm going to show how you can use it with ASP.NET MVC in order to mitigate cross-site scripting (XSS) attacks by defining trusted sources for running scripts. For our example we will use a page which contains references to three script files and one inline script.

<script src="//cdnjs.cloudflare.com/ajax/libs/jquery/1.11.3 jquery.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/8.5/highlight.min.js"></script>
<script src="~/Content/js/marked-0.3.3.min.js"></script>
<script>
    $(document).ready(function () {
        ...
    });
</script>

Delivering Policy

Delivering a policy should be done via the Content-Security-Policy response header, which should contain a semicolon (;) delimited list of directives. Typically every directive consists of a directive name and a space-separated source list. In our case we want to use the script-src directive, which governs scripts.
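For example, a response header allowing scripts only from the application's own origin and from cdnjs.cloudflare.com (the exact policy we will arrive at below) would look like this:

Content-Security-Policy: script-src 'self' cdnjs.cloudflare.com;

If the page above is being returned from an ASP.NET MVC action, we can easily inject such a header with an action filter.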

public sealed class ContentSecurityPolicyAttribute : FilterAttribute, IActionFilter
{
    public string ScriptSource { get; set; }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        return;
    }

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        StringBuilder policyBuilder = new StringBuilder();

        if (!String.IsNullOrWhiteSpace(ScriptSource))
        {
            policyBuilder.AppendFormat("script-src {0};", ScriptSource);
        }

        if (policyBuilder.Length > 0)
        {
            filterContext.HttpContext.Response.AppendHeader("Content-Security-Policy", policyBuilder.ToString());
        }
    }
}

In order to test this attribute we will apply it to the action with 'none' in the source list - this should result in the browser refusing to run any scripts.

public class HomeController : Controller
{
    [ContentSecurityPolicy(ScriptSource = "'none'")]
    public ActionResult Index()
    {
        ...
        return View();
    }
}

If we navigate to our action in Chrome we should see something similar to the screenshot below in the console.

[Screenshot: Content Security Policy errors in the browser console]

Now we can set the correct source list, which will allow the resources we want. Allowing requests to the domain of the application can be done by using 'self' as one of the sources.

[ContentSecurityPolicy(ScriptSource = "'self' cdnjs.cloudflare.com")]

This will remove all the "Refused to load" errors, leaving only the "Refused to execute" one.

Inline Execution

The Content Security Policy mechanism provides three ways for allowing inline execution:

  • Adding 'unsafe-inline' as a source, which allows all inline execution.
  • Whitelisting scripts by using a randomly generated nonce.
  • Whitelisting scripts by specifying their hashes as allowed script sources.

The first one is self-explanatory and provides no real protection, so I will focus on the two remaining ones. For the nonce mechanism to work, the application needs to generate a unique value for every request, which should be added to the script source list as 'nonce-$RANDOM'. Let's change the filter to handle this.

public enum ContentSecurityPolicyInlineExecution
{
    Refuse,
    Unsafe,
    Nonce,
    Hash
}

public sealed class ContentSecurityPolicyAttribute : FilterAttribute, IActionFilter
{
    public string ScriptSource { get; set; }

    public ContentSecurityPolicyInlineExecution ScriptInlineExecution { get; set; }

    public ContentSecurityPolicyAttribute()
    {
        ScriptInlineExecution = ContentSecurityPolicyInlineExecution.Refuse;
    }

    ...

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        StringBuilder policyBuilder = new StringBuilder();

        if (!String.IsNullOrWhiteSpace(ScriptSource) || (ScriptInlineExecution != ContentSecurityPolicyInlineExecution.Refuse))
        {
            policyBuilder.Append("script-src");

            if (!String.IsNullOrWhiteSpace(ScriptSource))
            {
                policyBuilder.AppendFormat(" {0}", ScriptSource);
            }

            filterContext.HttpContext.Items["ScriptInlineExecution"] = ScriptInlineExecution;
            switch (ScriptInlineExecution)
            {
                case ContentSecurityPolicyInlineExecution.Unsafe:
                    policyBuilder.Append(" 'unsafe-inline'");
                    break;
                case ContentSecurityPolicyInlineExecution.Nonce:
                    //A GUID is used here for simplicity; a cryptographically strong random value would be an even safer choice
                    string nonceRandom = Guid.NewGuid().ToString("N");
                    filterContext.HttpContext.Items["NonceRandom"] = nonceRandom;
                    policyBuilder.AppendFormat(" 'nonce-{0}'", nonceRandom);
                    break;
                default:
                    break;
            }

            policyBuilder.Append(";");
        }

        if (policyBuilder.Length > 0)
        {
            filterContext.HttpContext.Response.AppendHeader("Content-Security-Policy", policyBuilder.ToString());
        }
    }
}

The filter is now storing the information that we are using a nonce, as well as its random value, in HttpContext.Items. This is needed because the random value must be added through the nonce attribute to every script element which we want to allow. This can be easily done with a proper HtmlHelper extension.

public static class ContentSecurityPolicyExtensions
{
    public static IDisposable BeginCspScript(this HtmlHelper htmlHelper)
    {
        return new ContentSecurityPolicyScript(htmlHelper.ViewContext);
    }

    private class ContentSecurityPolicyScript : IDisposable
    {
        private readonly ViewContext _viewContext;
        private readonly ContentSecurityPolicyInlineExecution _scriptInlineExecution;
        private readonly TagBuilder _scriptTag;

        public ContentSecurityPolicyScript(ViewContext context)
        {
            _viewContext = context;

            _scriptInlineExecution = (ContentSecurityPolicyInlineExecution)_viewContext.HttpContext.Items["ScriptInlineExecution"];

            _scriptTag = new TagBuilder("script");
            if (_scriptInlineExecution == ContentSecurityPolicyInlineExecution.Nonce)
            {
                _scriptTag.MergeAttribute("nonce", (string)_viewContext.HttpContext.Items["NonceRandom"]);
            }

            _viewContext.Writer.Write(_scriptTag.ToString(TagRenderMode.StartTag));
        }

        public void Dispose()
        {
            _viewContext.Writer.Write(_scriptTag.ToString(TagRenderMode.EndTag));
        }
    }
}

Using this helper might be a little tricky because Razor may not be able to properly switch to HTML mode by itself. This is why we will use the <text> pseudo-element.

@using (Html.BeginCspScript())
{
    <text>
    $(document).ready(function () {
        ...
    });
    </text>
}

Now we just need to enable the mechanism on our action and the browser will execute our inline script.

[ContentSecurityPolicy(ScriptSource = "'self' cdnjs.cloudflare.com", ScriptInlineExecution = ContentSecurityPolicyInlineExecution.Nonce)]
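With this setting every response gets its own random value, emitted both in the header and in the nonce attribute of the script tag, so the rendered output will look something like this (the value below is purely illustrative - it changes with every request):

Content-Security-Policy: script-src 'self' cdnjs.cloudflare.com 'nonce-a32cd28485674be89cbd7e2dc4b3edf1';

<script nonce="a32cd28485674be89cbd7e2dc4b3edf1">
    $(document).ready(function () {
        ...
    });
</script>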

The hash based mechanism is a little bit more sophisticated. In order to whitelist a script, the application needs to provide its hash as a part of the source list in 'hashAlgorithm-hashValue' format. The allowed hash algorithms are SHA256, SHA384 and SHA512, and the computed hash should be provided in Base64 encoded form. Implementing support for this will be a bit more tricky. The hashes cannot be computed sooner than during the result processing, so our action filter needs to leave a placeholder for them and fill it in after executing the result.

public sealed class ContentSecurityPolicyAttribute : FilterAttribute, IActionFilter, IResultFilter
{
    public const string ScriptInlineExecutionContextKey = "ScriptInlineExecution";
    public const string NonceRandomContextKey = "NonceRandom";
    public const string ScriptHashListBuilderContextKey = "ScriptHashListBuilder";

    ...

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        StringBuilder policyBuilder = new StringBuilder();

        if (!String.IsNullOrWhiteSpace(ScriptSource) || (ScriptInlineExecution != ContentSecurityPolicyInlineExecution.Refuse))
        {
            policyBuilder.Append("script-src");

            ...

            filterContext.HttpContext.Items[ScriptInlineExecutionContextKey] = ScriptInlineExecution;
            switch (ScriptInlineExecution)
            {
                ...
                case ContentSecurityPolicyInlineExecution.Hash:
                    filterContext.HttpContext.Items["ScriptHashListBuilder"] = new StringBuilder();
                    policyBuilder.Append("<scripthashlistplaceholder>");
                    break;
                default:
                    break;
            }

            policyBuilder.Append(";");
        }

        ...
    }

    public void OnResultExecuted(ResultExecutedContext filterContext)
    {
        if (ScriptInlineExecution == ContentSecurityPolicyInlineExecution.Hash)
        {
            string headerValue = filterContext.HttpContext.Response.Headers["Content-Security-Policy"];
            filterContext.HttpContext.Response.Headers["Content-Security-Policy"] = headerValue.Replace("<scripthashlistplaceholder>", ((StringBuilder)filterContext.HttpContext.Items[ScriptHashListBuilderContextKey]).ToString());
        }
    }

    public void OnResultExecuting(ResultExecutingContext filterContext)
    { }
}

The hashes themselves can be computed by the helper we have already written - it can access the internal StringBuilder of the writer in order to extract the content which needs hashing. All that needs to be done is modifying the ContentSecurityPolicyScript class.

private class ContentSecurityPolicyScript : IDisposable
{
    ...
    private readonly int _viewBuilderIndex;

    public ContentSecurityPolicyScript(ViewContext context)
    {
        _viewContext = context;

        _scriptInlineExecution = (ContentSecurityPolicyInlineExecution)_viewContext.HttpContext.Items[ContentSecurityPolicyAttribute.ScriptInlineExecutionContextKey];

        _scriptTag = new TagBuilder("script");
        if (_scriptInlineExecution == ContentSecurityPolicyInlineExecution.Nonce)
        {
            _scriptTag.MergeAttribute("nonce", (string)_viewContext.HttpContext.Items[ContentSecurityPolicyAttribute.NonceRandomContextKey]);
        }

        _viewContext.Writer.Write(_scriptTag.ToString(TagRenderMode.StartTag));

        if (_scriptInlineExecution == ContentSecurityPolicyInlineExecution.Hash)
        {
            _viewBuilderIndex = ((StringWriter)_viewContext.Writer).GetStringBuilder().Length;
        }
    }

    public void Dispose()
    {
        if (_scriptInlineExecution == ContentSecurityPolicyInlineExecution.Hash)
        {
            StringBuilder viewBuilder = ((StringWriter)_viewContext.Writer).GetStringBuilder();
            string scriptContent = viewBuilder.ToString(_viewBuilderIndex, viewBuilder.Length - _viewBuilderIndex).Replace("\r\n", "\n");
            byte[] scriptHashBytes = new SHA256Managed().ComputeHash(Encoding.UTF8.GetBytes(scriptContent));
            string scriptHash = Convert.ToBase64String(scriptHashBytes);
            ((StringBuilder)_viewContext.HttpContext.Items[ContentSecurityPolicyAttribute.ScriptHashListBuilderContextKey]).AppendFormat(" 'sha256-{0}'", scriptHash);
        }
        _viewContext.Writer.Write(_scriptTag.ToString(TagRenderMode.EndTag));
    }
}

You may notice that I'm replacing \r\n with \n. The specification doesn't mention anything about this, but tests show that both Firefox and Chrome are doing it. Without this replacement the hash would never match for multi-line scripts. This way we have all three mechanisms available.
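Enabling the hash based mechanism on the action looks analogous to the nonce variant:

[ContentSecurityPolicy(ScriptSource = "'self' cdnjs.cloudflare.com", ScriptInlineExecution = ContentSecurityPolicyInlineExecution.Hash)]

If you ever need to verify one of the generated hashes by hand, a minimal sketch like the one below (assuming the same UTF-8 encoding and newline normalization the helper uses) should produce the exact token which ends up in the source list:

private static string ComputeScriptHash(string scriptContent)
{
    //Normalize the newlines the same way the helper (and the browsers) do
    scriptContent = scriptContent.Replace("\r\n", "\n");
    using (SHA256 sha256 = SHA256.Create())
    {
        //The token goes into the source list in 'sha256-<Base64 encoded hash>' format
        return String.Format("'sha256-{0}'", Convert.ToBase64String(sha256.ComputeHash(Encoding.UTF8.GetBytes(scriptContent))));
    }
}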

The created components allow usage of Content Security Policy in an ASP.NET MVC application without much effort. Both classes are available as part of Lib.Web.Mvc and will be extended with more directives before its next release.

This is the second time I've been asked by Packt Publishing to review one of their books. Previously it was Instant jqGrid by Gabriel Manricks, this time it is Learning JavaScriptMVC by Wojciech Bednarski. I don't use JavaScriptMVC in my daily work - I'm more of an AngularJS or Knockout guy - which allowed me to approach the book with a fresh and open mind.

[Book cover: Learning JavaScriptMVC]

The book is 124 pages long, which gives about 97 pages of actual content. The author's goal is to use this space to walk the reader through the main parts of the framework and to leave him with enough knowledge to start his own projects. Let's see how he tries to achieve this goal.

The book is divided into six chapters. The first one contains the usual introduction: a general framework description, installation instructions and the setup of the first demo application, which will be extended throughout the entire book. After that the author takes the reader through the core parts of the JavaScriptMVC framework, devoting a separate chapter to each one:

  • DocumentJS - a powerful documentation generation utility
  • FuncUnit - a QUnit-based functional testing framework
  • jQueryMX - the core of JavaScriptMVC, which provides a model-view-controller pattern implementation and a lot of utilities
  • StealJS - an independent code manager and packaging tool

The last chapter is devoted to creating a real-life application. The author walks the reader through all the high level parts while leaving the low level implementation details as homework.

The biggest advantage of the book is the way it describes the modules of the JavaScriptMVC framework. Every one of them is described as an independent tool, so the knowledge the reader gains can be used even without adopting the entire framework. Unfortunately the author is very hasty with the demos. The code lacks proper description, and at some moments the reader can feel that the author just keeps throwing more and more lines at him merely for copying into the editor.

The book provides well-organized material on the JavaScriptMVC framework, which is enough to get one started. From that point the reader should be comfortable enough to use all the resources available online in order to work his way through the framework details.

I've started playing with algorithms recently. I've been feeling a little bit rusty (I must admit that in my daily work I don't have too many occasions on which I would need to write some smart algorithms), so I decided to find some time to solve one former Codility challenge every week (you can find the solutions in my GitHub repository). Recently Codility has started publishing a programming course and some "further training" tasks. As I was looking through those, one caught my attention: Array Inversions Count (an inversion is a pair of indices (i, j) such that i < j and A[i] > A[j]). The task requirements are clearly screaming Mergesort, so I thought it was a great chance to see how well I can implement an old school algorithm in C#.

Codility runs its tests on the Mono platform. I have done some testing which confirms the same behavior on the Microsoft .NET Framework, but I still feel that I should note this.

Mergesort is a classic algorithm which follows the divide and conquer paradigm. It boils down to splitting and merging arrays with some reorganization along the way. I doubt I can explain it better than Wikipedia does, so I will go straight to the first implementation that came to my mind.

private int[] MergeSort(int[] A, out int inversionsCount)
{
  inversionsCount = 0;

  int[] sortedArray;
  if (A.Length == 1)
    sortedArray = A;
  //If the length of the input array is greater than one
  else
  {
    int lowerArrayInversionsCount = 0, upperArrayInversionsCount = 0;
    int lowerArrayIndex = 0, upperArrayIndex = 0;

    //Split the array into lower half
    int[] lowerArray = new int[A.Length / 2];
    Array.Copy(A, lowerArray, lowerArray.Length);
    //Which will be sorted recursively
    lowerArray = MergeSort(lowerArray, out lowerArrayInversionsCount);

    //And upper half
    int[] upperArray = new int[A.Length - lowerArray.Length];
    Array.Copy(A, lowerArray.Length, upperArray, 0, upperArray.Length);
    //Which also will be sorted recursively
    upperArray = MergeSort(upperArray, out upperArrayInversionsCount);

    inversionsCount = lowerArrayInversionsCount + upperArrayInversionsCount;

    //The sorted array is a result of proper merging of the lower and upper halves
    sortedArray = new int[A.Length];
    lowerArrayIndex = 0;
    upperArrayIndex = 0;
    //For all the elements
    for (int sortedArrayIndex = 0; sortedArrayIndex < sortedArray.Length; sortedArrayIndex++)
    {
      //If all the elements from the lower half have been copied
      if (lowerArrayIndex == lowerArray.Length)
        //Copy the element from the upper half
        sortedArray[sortedArrayIndex] = upperArray[upperArrayIndex++];
      //If all the elements from the upper half have been copied
      else if (upperArrayIndex == upperArray.Length)
        //Copy the element from the lower half
        sortedArray[sortedArrayIndex] = lowerArray[lowerArrayIndex++];
      //If the current element in the lower half is less than or equal to the current element in the upper half
      else if (lowerArray[lowerArrayIndex] <= upperArray[upperArrayIndex])
        //Copy the element from the lower half
        sortedArray[sortedArrayIndex] = lowerArray[lowerArrayIndex++];
      //In any other case
      else
      {
        //Copy the element from the upper half
        sortedArray[sortedArrayIndex] = upperArray[upperArrayIndex++];
        //And count the inversions
        inversionsCount += lowerArray.Length - lowerArrayIndex;
      }
    }
    }
  }

  return sortedArray;
}

The above implementation gives the following results on Codility:

[Screenshot: Codility results for Mergesort with Array.Copy array copying]

So the implementation is correct. Now it is time for the fun part: how can I make it faster? There is one place where I'm letting the framework do some work for me - calling Array.Copy while creating the lower and upper arrays. The .NET Framework also provides a Buffer.BlockCopy method, which is a more low-level way of copying arrays and as such might give better performance. So I've changed the calls to Array.Copy into equivalent calls to Buffer.BlockCopy.

//Buffer.BlockCopy expresses its offsets and counts in bytes, hence the int size constant
private const int _intSize = 4;

public int[] MergeSort(int[] A, out int inversionsCount)
{
  ...

  //Array.Copy(A, lowerArray, lowerArray.Length);
  Buffer.BlockCopy(A, 0, lowerArray, 0, lowerArray.Length * _intSize);
  ...

  //Array.Copy(A, lowerArray.Length, upperArray, 0, upperArray.Length);
  Buffer.BlockCopy(A, lowerArray.Length * _intSize, upperArray, 0, upperArray.Length * _intSize);
  ...
}

The results for this implementation are exactly the same as for Array.Copy. In other words, there is no noticeable performance gain or loss. In that case I decided to see what happens if I copy the arrays myself (with a good old for loop). A quick change in the source code:

public int[] MergeSort(int[] A, out int inversionsCount)
{  
  ...

  //Array.Copy(A, lowerArray, lowerArray.Length);
  for (lowerArrayIndex = 0; lowerArrayIndex < lowerArray.Length; lowerArrayIndex++)
    lowerArray[lowerArrayIndex] = A[lowerArrayIndex];
  ...

  //Array.Copy(A, lowerArray.Length, upperArray, 0, upperArray.Length);
  for (upperArrayIndex = 0; upperArrayIndex < upperArray.Length; upperArrayIndex++)
    upperArray[upperArrayIndex] = A[lowerArrayIndex + upperArrayIndex];
  ...
}

And voila.

Mergesort with "for-loop" array copying

Honestly, I wasn't expecting this. The implementation shows slightly better performance with big data sets. So the for loop seems to be copying arrays faster than the framework functions in this case.

All this focus on copying arrays made me wonder: do I even need to copy those arrays in the first place? It is probably enough to keep track of which part of the original array I need to work on. This thought brought me to the following solution.

private int[] MergeSort(int[] A, int startIndex, int endIndex, out int inversionsCount)
{  
  inversionsCount = 0;

  int[] sortedArray;
  //If the part of the array that needs sorting is just one element
  if (startIndex >= endIndex)
    //Return new array
    sortedArray = new int[] { A[endIndex] };
  //Otherwise
  else
  {  
    //Calculate the index of the last element in the lower subpart of the array
    int lowerArrayEndIndex = (startIndex + endIndex) / 2;
    int lowerArrayInversionsCount = 0, upperArrayInversionsCount = 0;

    //Recursively sort the lower and upper subparts of the array
    int[] lowerArray = MergeSort(A, startIndex, lowerArrayEndIndex, out lowerArrayInversionsCount);
    int[] upperArray = MergeSort(A, lowerArrayEndIndex + 1, endIndex, out upperArrayInversionsCount);

    inversionsCount = lowerArrayInversionsCount + upperArrayInversionsCount;

    //The result will be a new array, created by merging the lower and upper subparts
    sortedArray = new int[endIndex - startIndex + 1];
    int lowerArrayIndex = 0, upperArrayIndex = 0;
    //For every index in the original array
    for (int sortedArrayIndex = 0; sortedArrayIndex < sortedArray.Length; sortedArrayIndex++)
    {
      //If all the elements from the lower subpart have been merged
      if (lowerArrayIndex == lowerArray.Length)
        //Add the element from the upper subpart to the sorted array
        sortedArray[sortedArrayIndex] = upperArray[upperArrayIndex++];
      //If all the elements from the upper subpart have been merged
      else if (upperArrayIndex == upperArray.Length)
        //Add the element from the lower subpart to the sorted array
        sortedArray[sortedArrayIndex] = lowerArray[lowerArrayIndex++];
      //If the first unmerged element in the lower subpart is smaller than or equal to the first unmerged element in the upper subpart
      else if (lowerArray[lowerArrayIndex] <= upperArray[upperArrayIndex])
        //Add the element from the lower subpart to the sorted array
        sortedArray[sortedArrayIndex] = lowerArray[lowerArrayIndex++];
      //If the first unmerged element in the upper subpart is smaller than the first unmerged element in the lower subpart
      else
      {
        //Add the element from upper subpart to the sorted array
        sortedArray[sortedArrayIndex] = upperArray[upperArrayIndex++];
        //And count the inversions
        inversionsCount += lowerArray.Length - lowerArrayIndex;
      }
    }
  }

  //Return sorted array (part)
  return sortedArray;
}
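For completeness, a thin wrapper can kick off the recursion - a minimal sketch assuming Codility's usual solution entry point and a non-empty input array (any edge cases required by the task statement are left out):

public int solution(int[] A)
{
  int inversionsCount;
  //Sort the entire array - only the inversions count matters here
  MergeSort(A, 0, A.Length - 1, out inversionsCount);
  return inversionsCount;
}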

This should improve the performance, so let's see the results.

[Screenshot: Codility results for Mergesort without array copying]

The improvement for big data sets is there. When it comes to smaller input arrays there is no change. This leads to the conclusion that for smaller data sets the copying of arrays doesn't have much influence on the overall execution time (but the space complexity of this implementation is better, so it is worth the effort anyway).

Probably, for most of you, what I have described above is nothing new. For a person like me (who doesn't have many chances to play with algorithms like this) it was a fun and educational experience. This is why I decided to write it down - maybe (just maybe) somebody else will find it useful or interesting.

I've been using jqGrid for quite a long time now. I've written some posts about it on this blog and answered a lot of questions on Stack Overflow. I've also created an open source project which provides a strongly typed helper for easier use of jqGrid in ASP.NET MVC. So when I was approached by Packt Publishing with the proposition of reading and reviewing Instant jqGrid by Gabriel Manricks, I simply couldn't say no.

[Book cover: Instant jqGrid]

The goal of the Packt Publishing Instant series is to provide starter books for various technologies, taking the reader step by step from understanding what a technology is to being able to work with its most important features - Instant jqGrid follows this convention to the letter. If you are already familiar with jqGrid and know how to use its core functionalities you can stop reading right now - this book is not for you. The target audience for this book is people with no prior jqGrid experience.

The book takes off by describing what jqGrid is and why one might need it. After a brief presentation of the plugin, the author prepares the ground for some real action by describing the installation process. When all the required files are in place, the reader will see how to create the simplest grid possible. From this point the grid is constantly extended in order to show the most commonly used features:

  • The built-in and custom formatters
  • The pager and the navigator
  • Working with remote data sources
  • Form editing
  • Searching
  • Working with the methods and events exposed by the jqGrid API

The last section of the book contains directions for readers who would like to dive into more advanced aspects of jqGrid: official websites, articles, tutorials and places where you are most likely to get an answer when you are stuck.

Instant jqGrid is the first book about jqGrid, and it is very good that it is targeted at jqGrid newcomers, as the currently available learning materials are more like raw documentation for advanced users than a beginner's guide. The author did a great job: the book is well written, logically structured and delivers on the promise of the series - going through it feels simple and natural. If you want to start your adventure with jqGrid, this book will provide a well-organized how-to, and when you are ready to go beyond its scope you will know where to look.

I released version 6.2.0 of Lib.Web.Mvc a couple of days ago and it is available on CodePlex or via NuGet:

PM> Install-Package Lib.Web.Mvc

This release brings some updates to the jqGrid strongly typed helper (you can check the full list here) and I would like to describe one of them more closely - the jQuery UI integration features:

  • JqGridColumnEditTypes.JQueryUIAutocomplete
  • JqGridColumnSearchTypes.JQueryUIAutocomplete
  • JqGridColumnEditTypes.JQueryUISpinner
  • JqGridColumnSearchTypes.JQueryUISpinner
  • JqGridColumnEditTypes.JQueryUIDatepicker
  • JqGridColumnSearchTypes.JQueryUIDatepicker
  • JqGridColumnPredefinedFormatters.JQueryUIButton

JQueryUIAutocomplete (JqGridColumnEditTypes/JqGridColumnSearchTypes)

This is a custom search and edit type which allows easy configuration of the jQuery UI Autocomplete widget for the column:

[Screenshot: a jqGrid column with the jQuery UI Autocomplete widget]

Widget options are available through the JqGridColumnSearchable and JqGridColumnEditable attributes' properties:

  • AutoFocus
  • Delay
  • MinLength

The action and controller from which the suggestions will be requested can be set through the attributes' constructors, while all the events can be bound in the standard way - by overriding the DataEvents property. A rough usage sketch is shown below.
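For illustration only, decorating a view model property could look more or less like this. The constructor parameters shown here (a "GetSuggestions" action on a "Products" controller) are hypothetical, and the exact attribute overloads are an assumption based on the description above - check the Lib.Web.Mvc documentation for the real signatures:

public class ProductViewModel
{
    //"GetSuggestions" and "Products" are made-up action and controller names
    [JqGridColumnEditable("GetSuggestions", "Products", EditType = JqGridColumnEditTypes.JQueryUIAutocomplete, MinLength = 2)]
    public string Name { get; set; }
}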

JQueryUISpinner (JqGridColumnEditTypes/JqGridColumnSearchTypes)

This is a custom search and edit type which allows easy configuration of the jQuery UI Spinner widget for the column:

[Screenshot: a jqGrid column with the jQuery UI Spinner widget]

Widget options are also available through the JqGridColumnSearchable and JqGridColumnEditable attributes' properties:

  • Culture
  • SpinnerDownIcon
  • SpinnerUpIcon
  • Incremental
  • Max
  • Min
  • NumberFormat
  • Page
  • Step

JQueryUIDatepicker (JqGridColumnEditTypes/JqGridColumnSearchTypes)

This is a custom search and edit type which allows easy configuration of the jQuery UI Datepicker widget for the column:

[Screenshot: a jqGrid column with the jQuery UI Datepicker widget]

The following properties of the JqGridColumnSearchable and JqGridColumnEditable attributes can be used to configure the widget's behaviour:

  • AppendText
  • AutoSize
  • ChangeMonth
  • ChangeYear
  • ConstrainInput
  • DateFormat
  • FirstDay
  • GotoCurrent
  • MaxDate
  • MinDate
  • NumberOfMonths
  • SelectOtherMonths
  • ShortYearCutoff
  • ShowCurrentAtPos
  • ShowMonthAfterYear
  • ShowOtherMonths
  • ShowWeek
  • StepMonths
  • YearRange
  • YearSuffix

The widget will take the localization settings (dayNames, dayNamesMin, dayNamesShort, monthNames and monthNamesShort) from the jqGrid language settings.

JQueryUIButton (JqGridColumnPredefinedFormatters)

This is a custom formatter which allows easy configuration of the jQuery UI Button widget for the column:

[Screenshot: a jqGrid column with the jQuery UI Button widget]

Widget options are available through the JqGridColumnFormatter attribute's properties:

  • Label
  • PrimaryIcon
  • SecondaryIcon
  • Text
  • OnClick

The cell value can be accessed in the function passed to OnClick via the data-cell-value attribute of the sender (for example like this: $(this).attr('data-cell-value');).

In the next couple of days I'm going to update the sample project in order to show some of this new functionality.
