Net Revision Tool


I work with a team of around ten other developers and we’re gradually moving our applications to Git. At this stage we still have what I’d call “legacy practices” and we mostly build and deploy our applications, services and sites manually.

We recently needed to deploy quite a few services and applications as part of the roll-out of a large project. As I began this work I was thinking how nice it would be to somehow tie the deployed code to the Git commit or tag from which it was built, enabling us to troubleshoot bugs more rapidly and accurately by letting a developer rebuild from the very same source that a troublesome deployed app was built from.

Clearly what I needed was a way to associate the commit ID of the latest commit with one or more generated assemblies, ideally by setting some assembly attribute to the SHA-1 of the commit and possibly more, if only…

I assumed that this was probably something I’d need to think about and design and build myself, and it wouldn’t be an hour’s work – clearly.

So while sipping yet another coffee I did a web search for some of the terms that naturally come up for this question, and after a short time I found a post on Stack Overflow all about this subject that mentioned a utility named NetRevisionTool. I read the post with mounting interest, then forked and cloned the tool’s repo in order to explore it further.

It didn’t take me long to recognize that this is a very simple yet powerful little utility, and after a short time I understood that making a straightforward change to any Visual Studio project would give me what I sought and more!

Basically all one has to do is add a pre-build and a post-build event to the project and add a single attribute to the project’s AssemblyInfo.cs file. The build events are merely invocations of the tool’s .exe with some basic command line arguments.

The pre-build event analyzes a mask string in the AssemblyInformationalVersion attribute; this mask describes the content and format of the string to be generated by the tool. The tool then generates the required string and writes it into AssemblyInfo.cs (backing up the file first).

Then the build itself runs and the generated string gets embedded into the assembly.

Finally the post-build event runs, which simply restores the backed-up copy of AssemblyInfo.cs ready for the next build, whenever that might be. (For this reason it’s important to designate the post-build event to run “Always”.)
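For illustration, the project wiring looks something like this – I’m recalling the /patch and /restore switches from memory and the network path is only an example, so treat both as assumptions and check the tool’s documentation for the exact arguments:

    Pre-build event:    "\\server\tools\NetRevisionTool.exe" /patch "$(ProjectDir)"
    Post-build event:   "\\server\tools\NetRevisionTool.exe" /restore "$(ProjectDir)"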

Options

The tool itself has command line options and the mask string has a host of predefined placeholders you can embed; here’s the string I settled on for the time being:

{b:uymd-} {b:uhms:} -> {mname} -> {cname} -> {branch} -> {CHASH:8} {!:(Warning – There were uncommitted changes present)}

This will cause the tool to generate a string that contains the build date and build time, the name of the computer on which the build executed, the name of the committer, the name of the branch the commit was on at the time of the build, the first eight characters of the SHA-1 and finally a warning message that is appended if the repo had any uncommitted changes at the time of the build.
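For completeness, here is how that mask sits in AssemblyInfo.cs – a minimal sketch in which the attribute value is simply the mask quoted above, split across two string literals for readability:

using System.Reflection;

// The pre-build invocation of NetRevisionTool rewrites this mask into the generated
// string; the post-build invocation restores the file from its backup afterwards.
[assembly: AssemblyInformationalVersion(
    "{b:uymd-} {b:uhms:} -> {mname} -> {cname} -> {branch} -> {CHASH:8} " +
    "{!:(Warning – There were uncommitted changes present)}")]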

(I added support for {mname} myself and submitted this as a pull request to the tool developer’s original repo; I also added a command line option /echo which causes the generated string to be written to the Visual Studio output window when the build runs.)

Naturally it’s preferable to always deploy code that doesn’t carry the warning message. We also use GitHub, and after a pull-request merge the most recent commit is always a merge commit with the author name “GitHub”, so this too is ideally what one would see in the final generated message: it means we can be certain that the commit has indeed been merged and cannot ever disappear due to the developer rebasing, as could be the case if the commit were only local or on their fork.

The question of where exactly to put the executable so that all developers can invoke it when their builds run naturally came up, and after a short discussion we decided to put the tool onto an existing shared network drive – not a robust solution perhaps, but easily sufficient for our current day-to-day working practices.

Having convinced myself this did what I needed and settled on the options, it now takes me just a couple of minutes to add this to any Visual Studio project. For small teams or even lone developers, I cannot overstate how valuable this little tool is, and I’m surprised it isn’t better known among developers!

A Coroutine Library in C#

In this post I’m going to introduce an implementation of coroutines written in C#. The code I’m presenting is the result of my initial foray into this (to me) unfamiliar concept, so please bear that in mind if it appears a little preliminary.

The coroutine library provides a mechanism for two or more methods to transfer control to one another in such a way that when a method resumes execution, it does so at the position from which it previously passed control to another method.

The C# language includes support for writing iterator methods which use the yield return operation to return control to the caller in such a way that when re-invoked at a later time, execution resumes at the statement following the yield return. This mechanism is the basis for my implementation of coroutines.

The approach I’ve used is to define an abstract base class which provides a means to call user-written iterator methods (henceforth referred to as coroutines) so that they can return the information needed for another iterator method to be invoked. The base class thus invokes the coroutines on our behalf and hides the rather involved “plumbing” code that is necessary to make the coroutine mechanism easy to use.

Because coroutines are a set of mutually cooperating methods, it is natural to design the coroutine library so that the coroutines are defined as methods which are all members of a single class. This class derives from an underlying base class that does the housekeeping necessary to track each coroutine’s state. The abstract base class is named Cooperative, and all the user need do is derive a class from it and implement a set of coroutines in that derived class.

Here is an example of a simple class that leverages Cooperative and defines two coroutines; this will help you see how coroutines actually look before we explore how the underlying implementation is coded:

using System;
using System.Collections.Generic;
using System.Diagnostics;

public class KeyboardCoroutines : Cooperative
{
    // Keys read from the console are buffered here and handed between the two coroutines.
    private Queue<ConsoleKeyInfo> key_queue = new Queue<ConsoleKeyInfo>();

    public override void BeginProcessing(object Arg)
    {
        StartByActivating(ProduceFromKeyboard,Arg);
    }

    // Producer coroutine: reads console keys into the queue, yielding to the consumer
    // when the queue fills up or Escape is pressed.
    private IEnumerator<Activation> ProduceFromKeyboard(object Arg)
    {
        ConsoleKeyInfo info = Console.ReadKey(true);

        while (info.Key != ConsoleKey.Escape)
        {
            while (key_queue.Count < 10 && info.Key != ConsoleKey.Escape)
            {
                key_queue.Enqueue(info);
                info = Console.ReadKey(true);
            }
            
            if (info.Key == ConsoleKey.Escape)
                yield return Activate(ConsumeFromQueue,1);
            else
            {
                yield return Activate(ConsumeFromQueue, 2);
                key_queue.Enqueue(info);
            }

            Debug.WriteLine("ProduceFromKeyboard sees a result of: " + Result.ToString());
        }
    }

    // Consumer coroutine: drains the queue and echoes the keys, yielding back to the
    // producer when the queue is empty or Escape is seen.
    private IEnumerator<Activation> ConsumeFromQueue(object Arg)
    {
        ConsoleKeyInfo key = key_queue.Dequeue();

        while (key.Key != ConsoleKey.Escape)
        {
            while (key_queue.Count > 0 && key.Key != ConsoleKey.Escape)
            {
                Console.Write(key.KeyChar);
                key = key_queue.Dequeue();
            }

            if (key.Key == ConsoleKey.Escape)
                yield return Activate(ProduceFromKeyboard,3);
            else
            {
                Console.Write(key.KeyChar);
                yield return Activate(ProduceFromKeyboard,4);
            }
            Debug.WriteLine("ConsumeFromQueue sees a result of: " + Result.ToString());
        }
    }
}

You’ll notice right away that a coroutine has a return type of IEnumerator<Activation>, and that a coroutine passes control to some other coroutine by executing a yield return whose value is produced by a call to Activate in the base class. The base class therefore enumerates a coroutine and uses the value returned by each iteration to select and invoke the enumerator associated with the next coroutine to execute. Each coroutine’s execution is temporarily suspended at the point it yields and resumes at the next statement when some other coroutine passes control back to it.
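To make the shape of this concrete, here is a rough sketch of how such a driver could be put together. The Activation type, the Result property and the dictionary of live enumerators are my own illustrative guesses at the plumbing rather than the library’s actual code:

using System.Collections.Generic;

public delegate IEnumerator<Activation> Coroutine(object arg);

// Carries the coroutine to activate next together with the value it should observe.
public sealed class Activation
{
    public Activation(Coroutine next, object arg) { Next = next; Arg = arg; }
    public Coroutine Next { get; private set; }
    public object Arg { get; private set; }
}

public abstract class MiniCooperative
{
    // One live enumerator per coroutine so that each one resumes where it last yielded.
    private readonly Dictionary<Coroutine, IEnumerator<Activation>> live =
        new Dictionary<Coroutine, IEnumerator<Activation>>();

    protected object Result { get; private set; }

    public abstract void BeginProcessing(object arg);

    // Called inside a yield return: records the value the activated coroutine will see
    // in Result and names the coroutine to which control should pass.
    protected Activation Activate(Coroutine next, object result)
    {
        Result = result;
        return new Activation(next, result);
    }

    protected void StartByActivating(Coroutine first, object arg)
    {
        Coroutine current = first;
        while (current != null)
        {
            IEnumerator<Activation> e;
            if (!live.TryGetValue(current, out e))
            {
                e = current(arg);        // first activation: create the iterator
                live[current] = e;
            }

            if (!e.MoveNext())           // the coroutine ran to completion
                break;

            current = e.Current.Next;    // hand control to the coroutine it named
            arg = e.Current.Arg;
        }
    }
}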

In my next post on this subject I’ll show you the base class implementation and explore some alternative ways to expose this coroutine mechanism.

Nested IEnumerables

I’ve been exploring the subject of coroutines recently and I’ll be writing more about this in a separate post in the near future. Designing an implementation of coroutines for C# requires taking full advantage of C#’s support for yield return and IEnumerable<T>.

A spinoff from this exploratory work has been a practical mechanism for supporting yield return of both individual sequence values and complete sequences – a capability sadly absent from the C# language.

Conceptually here’s some pseudo-code that conveys this idea:

        public IEnumerable<string> FirstSequence()
        {
            yield return "1";
            yield return "2";
            yield sequence SecondSequence();
            yield return "7";
            yield return "8";
        }

        public IEnumerable<string> SecondSequence()
        {
            yield return "3";
            yield return "4";
            yield return "5";
        }

The yield sequence keywords are of course fictitious but convey the requirement nicely – namely that when enumerating values from FirstSequence() the values present within SecondSequence() are automatically enumerated and returned, as if they’d been yielded directly from within FirstSequence().

The current C# language (Version 5) does not permit such constructs and one must code the following in order to get the desired effect:

        public IEnumerable<string> FirstSequence()
        {
            yield return "1";
            yield return "2";
            foreach (string V in SecondSequence())
               yield return V;
            yield return "7";
            yield return "8";
        }

        public IEnumerable<string> SecondSequence()
        {
            yield return "3";
            yield return "4";
            yield return "5";
        }

It seems – to me at least – that the implementation of yield return is unduly limited, mainly because there appears to be no significant reason why the C# compiler could not transform yield sequence S; into foreach (var V in S) yield return V; – the latter is fully supported, provides the desired semantics, and the transformation seems straightforward.

We can overcome this limitation and approach the elegance and simplicity of our imagined yield sequence by adopting a design similar to the one used for implementing coroutines, which I’ll discuss in a future post. Namely, we create an object that manages the enumeration for us – a sort of enumeration proxy – and this iterator object then provides the processing required to make everything work. We can’t transform the code into another form (as the C# compiler does when it encounters the yield keyword), but we can invisibly enumerate embedded sequences by maintaining a stack of enumerators, enabling us to suspend enumeration of one sequence and begin enumeration of the embedded sequence.

Once all elements of the embedded sequence have been enumerated we can pop the stack and resume enumeration using the previous enumerator, thus continuing with the original sequence. This technique will be explored along with some real code in a future post.
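In the meantime, here is a minimal sketch of the stack-of-enumerators idea. The YieldSequence wrapper and the Flatten driver are illustrative names of my own rather than the eventual library’s API; the outer sequence yields a YieldSequence wherever the imaginary yield sequence statement would appear, and the driver descends into it:

using System.Collections;
using System.Collections.Generic;

public static class NestedEnumeration
{
    // Marker wrapper meaning "enumerate this whole sequence in place".
    public sealed class YieldSequence
    {
        public YieldSequence(IEnumerable inner) { Inner = inner; }
        public IEnumerable Inner { get; private set; }
    }

    // Enumerates a sequence whose elements are either values of T or nested
    // YieldSequence wrappers, descending into each wrapper as it is encountered.
    public static IEnumerable<T> Flatten<T>(IEnumerable outer)
    {
        Stack<IEnumerator> stack = new Stack<IEnumerator>();
        stack.Push(outer.GetEnumerator());

        while (stack.Count > 0)
        {
            IEnumerator current = stack.Peek();

            if (!current.MoveNext())
            {
                stack.Pop();          // this sequence is exhausted, resume the previous one
                continue;
            }

            YieldSequence nested = current.Current as YieldSequence;
            if (nested != null)
                stack.Push(nested.Inner.GetEnumerator());   // suspend and descend
            else
                yield return (T)current.Current;
        }
    }
}

With this in place, FirstSequence would be declared as IEnumerable<object>, yield new NestedEnumeration.YieldSequence(SecondSequence()) in place of the imaginary statement, and callers would enumerate it through Flatten<string>(FirstSequence()).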


Lexical Analysis With F# – Part 5

I’ve begun to establish a reasonably sound design pattern for the lexical analyzer. Of course this isn’t intended to be an ideal solution to the general case of writing a tokenizer for any language – it does not support any kind of shorthand for describing token structure, for example. But it isn’t overly complex and at this stage supports some of the common tokens seen in C, C++ or C#.

Continue reading

A Superlean Inter-Thread Queue

Download the complete sample solution – scroll to bottom of post.

There are times when a design calls for the ability to pass information from one thread to another within an application. The Actor design pattern hinges upon such a capability, as do other bespoke architectures in which dedicated threads play a central role. In an asynchronous design, information passes between threads by queuing requests to a thread pool; the operating system internally schedules the processing of queued work by selecting some arbitrary thread within the pool and causing it to invoke a callback that you supply directly or indirectly, as when using async/await.
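To illustrate that style of hand-off, queuing a request to the thread pool in C# looks roughly like this – a minimal sketch, not the queue this post introduces:

using System;
using System.Threading;

class ThreadPoolHandOff
{
    static void Main()
    {
        // The operating system picks an arbitrary pool thread on which to run the callback.
        ThreadPool.QueueUserWorkItem(state =>
        {
            Console.WriteLine("Processing '{0}' on pool thread {1}",
                state, Thread.CurrentThread.ManagedThreadId);
        }, "some request");

        Console.ReadLine();   // keep the process alive long enough to see the output
    }
}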

Continue reading

Shakey The Robot

When I was starting out in my career I retained my interest in technology, electronics and so on, and one day I stumbled upon a truly fascinating book – a book I recommend to people even today because of its in-depth yet readable coverage of AI and robotics – The Thinking Computer – Mind Inside Matter. This book is about AI and LISP and the things one would expect, but it’s very readable and not overly theoretical – it also has a fascinating chapter about a 1966 robot project set up at Stanford; the robot was named Shakey.

Continue reading

An Experimenter’s Robot Framework

When I was in my mid-teens and still living in Liverpool I spent a lot of time reading technical books and magazines; electronics was my hobby, although I hadn’t yet started my formal education in that field. One day I picked up a special Christmas issue of Electronics Today International (ETI) entitled Electronics Tomorrow, which contained several fascinating articles speculating on the future of electronics.

Continue reading

Never say “never”

The increasing sophistication of modern processors, operating systems, programming languages and design patterns, combined with our unceasing determination to tackle ever more ambitious problems, has greatly increased the scope for unsatisfactory performance. Developers must be more vigilant than ever before if they want to satisfy their customers’ expectations with respect to speed, latency and cost.

Continue reading

Lexical Analysis With F# – Part 4

Confronting immutability

As I’ve been working on making the lexical analyzer (hereafter called “tokenizer”) more complete and simpler to understand, it has really begun to dawn on me why immutability is so important: by eliminating the traditional concept of assignment we are left only with function invocation.

Continue reading