
Tech·Ed Africa 2010: Want to go for free?

Win 1 of 4 Tickets to Microsoft Tech·Ed Africa 2010 worth R6 150 each!!!

Venue:    ICC Durban

Date:       17th - 20th October 2010

All you need to do is take a photo of a Microsoft Tag in a really cool/funny/practical place and upload the picture to the Facebook competition page.

More details in the competition animated video here.

Full Competition Rules on our Facebook page.

Dates for Submissions & Announcements of Winners:

  • 25 Aug 2010    -    Last Date For Submissions (week 1) (5pm)
  • 27 Aug 2010    -    Week 1 Winner Announced
  • 01 Sep 2010    -    Last Date For Submissions (week 2) (5pm)
  • 03 Sep 2010    -    Week 2 Winner Announced
  • 08 Sep 2010    -    Last Date For Submissions (week 3) (5pm)
  • 10 Sep 2010    -    Week 3 Winner Announced
  • 15 Sep 2010    -    Last Date For Submissions (week 4) (5pm)
  • 17 Sep 2010    -    Week 4 Winner Announced

Submissions & Announcements of Winners:

  • A new winner will be selected weekly.
  • Last date for submissions for a particular week is 5pm Wednesday of that week.
  • Winner for that week will be announced on the Friday.
  • Submissions after 5pm will count towards the following week.
  • Submissions which did not win in a previous week will still be considered in following weeks and need not be re-submitted.
  • A person can only win once; thereafter all their other submissions will be ignored.
  • You cannot submit on behalf of another person.
  • Submissions are done by posting a photo to the Facebook page wall.

Terms and Conditions apply:

This competition is limited to Tech·Ed Africa 2010 entrance and does not include travel, hotel or any other expenses. If you win, you will be required to help out at the Developers Community Lounge at Tech·Ed Africa 2010 for 3 hours a day. For the full list of rules please consult the Facebook page.

Pulled Apart - Part VII: PLINQ, not as easy as first assumed


Note: This is part of a series, you can find the rest of the parts in the series index.

PLINQ is Parallel LINQ: the ability to run LINQ queries with the parallel extensions in .NET 4. The idea is that you take a simple LINQ query, append .AsParallel to the end, and it is magically parallel – as in my insane solution to Fizz Buzz below:

var result = from i in Enumerable.Range(0, 1000).AsParallel()
             where (i % 3 == 0 || i % 5 == 0)
             select new { value = i, answer = i % 3 == 0 ? (i % 5 == 0 ? "Fizz Buzz" : "Fizz") : "Buzz" };

foreach (var item in result)
{
    Console.WriteLine("{0} gets a {1}", item.value, item.answer);
}

If you have been to one of my what’s new in .NET 4 talks you would’ve even seen me demo it this way, and for that I am VERY VERY SORRY – because I was wrong, wrong, wrong.

In Pull I made this exact mistake to get the updating of podcasts to run in parallel, and it wasn’t until I implemented a status view that I noticed it wasn’t actually running in parallel (two weeks and 46 check-ins before I realised this).

The problem is that appending .AsParallel does nothing but some setup. To actually process the results in parallel you must use the .ForAll extension, as in the example below – the difference is in how the results are consumed: a foreach pulls the results back to a single consuming thread, while .ForAll runs the action on the worker threads themselves:

var result = from i in Enumerable.Range(0, 30).AsParallel()
             where (i % 3 == 0 || i % 5 == 0)
             select new { value = i, answer = i % 3 == 0 ? (i % 5 == 0 ? "Fizz Buzz" : "Fizz") : "Buzz" };

result.ForAll(item =>
{
    Console.WriteLine("{0} gets a {1}", item.value, item.answer);
});

Now Pull works all in parallel and I am happy to move on, right? WRONG again. In my research I found a white paper written by Pamela Vagata from the Parallel Computing Platform Group at Microsoft which covers when to use PLINQ and when to use Parallel.ForEach. This paper is fantastic and highlights that the two are not equal and that you should use the right tool for the job. My quick reference table, based on that white paper, is below (a ✓ marks the one you should use):

Action                                                   PLINQ   Parallel.ForEach
Simple Data-Parallel Operation with Independent Actions          ✓
Ordered Data-Parallel Operation                          ✓
Streaming Data-Parallel Operation                        ✓
Operating over Two Collections                           ✓
Thread-Local State                                               ✓
Exiting from Operations                                          ✓

If you do not know what those actions mean then you must grab the white paper. As you can see, I should never have used PLINQ, because I am doing a Simple Data-Parallel Operation with Independent Actions. Why is PLINQ wrong here? Well, the white paper explains:

While PLINQ has the ForAll operator, it may be easier to think in terms of parallel loops rather than parallel queries for this type of scenario. Furthermore, PLINQ may be too heavyweight for a simple independent action. With Parallel.ForEach, you can specify ParallelOptions.MaxDegreeOfParallelism, which specifies that at most N threads are required. Thus, if the ThreadPool’s resources are scarce, even if the number of available threads is less than N, those threads will begin assisting the execution of Parallel.ForEach. As more threads become available, those resources will then be used for execution of the loop’s body delegate as well. However, PLINQ requires exactly N threads, which is specified by using the WithDegreeOfParallelism() extension method. In other words, for PLINQ N represents the number of threads which are actively involved in the PLINQ query.
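Translated into code, the Parallel.ForEach version of my scenario would look something like this minimal sketch (the podcast list, the degree of parallelism and the action are all made up for illustration):

using System;
using System.Threading.Tasks;

class Example
{
    static void Main()
    {
        // Hypothetical list of items to process; Pull's real podcast type differs.
        string[] podcasts = { "Podcast A", "Podcast B", "Podcast C" };

        // At most 4 threads are requested; fewer may be used while the
        // ThreadPool is busy, which is the flexibility the white paper describes.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

        Parallel.ForEach(podcasts, options, podcast =>
        {
            // A simple independent action per item.
            Console.WriteLine("Updating {0}", podcast);
        });
    }
}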

Final Thoughts

.NET 4 has made doing parallel work very easy – in fact, it is too easy to do the wrong thing and still have it work. Spending time researching the right method is vital in software development; don’t just assume.

New posters added to www.newdrp.com

Stolen directly from Zayd Kara’s blog, which is a fantastic read. If you’re not reading his blog, you should be.

Our team (editor: our = the ATC team at BB&D) has been hard at work and is pleased to announce that the following posters have been added to the DRP site for FREE download. Here is a quick brief on the posters that have been added:

DirectCompute

Direct Compute – “Microsoft DirectCompute is an application programming interface (API) that supports general-purpose computing on graphics processing units on Microsoft Windows Vista and Windows 7. DirectCompute is part of the Microsoft DirectX collection of APIs and was initially released with the DirectX 11 API but runs on both DirectX 10 and DirectX 11 graphics processing units”

Download and details.

WCF Error Handling Best Practices

WCF Error Handling Best Practices – WCF error handling is complicated by the fact that client and service are separated by physical boundaries. This quick reference poster provides you with best practices for handling WCF errors.

Download and details.

Windows Embedded Standard 2009

Windows Embedded Standard 2009 – “Windows Embedded is a family of operating systems from Microsoft designed for use in embedded systems.” This quick reference poster describes the development cycle of creating a Windows Embedded image.

Download and details.

Windows XP Mode

Windows XP Mode & Windows Virtual PC – Windows XP Mode and Windows Virtual PC, available on Windows 7, allow you to run multiple Windows environments, such as Windows XP Mode, from your Windows 7 desktop. This quick reference poster provides a quick overview to help you decide whether Windows XP Mode and Windows Virtual PC are for you.

Download and details.

It's Dev4Dev's time again!

My favourite gathering of developers happens 2 or 3 times a year: it’s called Dev4Devs. This is a free event which Microsoft runs, where ANYONE can present a topic – but they only have 20 minutes! This means that in a morning you see 7 topics and, rather than getting swamped in all the details, you dive directly into the really important parts.

The topic list is below – there are some exciting topics there, and even some non-MS technology is covered too!

I am also really glad that the entire ATC team at BB&D, which is the team I work in, is presenting – they are highlighted in the list below!

The next one takes place on 4 September 2010 at Microsoft’s offices in Johannesburg, and you can register at https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032456806&Culture=en-ZA

Session List for Dev4Devs

Windows Phone 7 - Getting Started

A high level introduction to getting started with Windows Phone 7 development including: where to begin, options for developers, thinking about design and a demonstration application.

Presenter: Johannes van Schalkwyk

Making MVVM easy

Starting with WPF, Silverlight or WP7? Heard of MVVM but think it’s WAY too complex for your simple application? Join me for a crash course using the easiest MVVM framework available… Caliburn.Micro!
Presenter: Rudi Grobler (ATC Team Member)

Why you should care about Google Closure

Closure is a modularised set of JavaScript libraries that can assist you in building rich internet applications.

It's been battle-tested by Google on sites like: Gmail, Google Docs and Google Maps.

Attend this short intro to get an understanding of how important these libraries are and why you should consider using them in your next big internet app.

Presenter: Simon Stewart

Introducing NHibernate 3

The daddy of .NET ORM is back with a new release. In this session you'll see a few of the newest features – such as a full IQueryable LINQ provider – that make NHibernate 3 the best release yet!
Presenter: Kevin McKelvin

Branding SharePoint 2010 with MasterPages, Layouts and CSS

One of the largest limitations of WSS 3.0 and MOSS 2007 is how hard it is to brand SharePoint without intricate knowledge of the platform – and, in some cases, without breaking a few rules and modifying out-of-the-box system files to get the desired look and feel. Come and see how the theming engine in SharePoint 2010, together with CSS, Master Pages and Layouts, can be used to brand your SharePoint site using the amazing new SharePoint Designer 2010.
Presenter: Brent Samodien

Unit Testing - Code Coverage & Mocking

In this presentation William will demonstrate how code coverage tools help measure the effectiveness of your unit tests.  He will also show how Mocking tools can help to add value to your unit tests and ensure that all edge-case logic is properly checked.
Presenter: William Brander (ATC Team Member)

Getting ready for Windows Azure development

Heard about the cloud? Excited about the possibilities? In this session we have a 1000-mile introduction to Microsoft’s operating system for the cloud, Windows Azure, how it compares to the other cloud offerings that are out there and how to get your hands dirty with the skill-up process. Endless possibilities + new tech = fun stuff.
Presenter: Ryno Rijnsburger

An introduction to Mercurial Source Control

Want a quick introduction to a Distributed Version Control System (DVCS)? Meet Mercurial: a cross-platform, fast, lightweight source control management system designed for easy and efficient handling of distributed projects.
Presenter: Zayd Kara (ATC Team Member)

Making money with Coded UI

Coded UI is a brand new feature of Visual Studio 2010 which enables you to quickly build automated user interface tests for your application and run them as if they were unit tests. In the talk we will look at how Coded UI can change your life, one UI at a time!
Presenter: ME! (ATC Team Member)

Hack .Net in 10 Seconds - Why obfuscation is critical

Hacking 101 – I demonstrate how to bypass basic copy protection in an unobfuscated .NET application through reverse engineering, and show how obfuscation adds a layer of protection. I also demonstrate additional techniques for protecting your applications from hacking once they are released into the wild.
Presenter: Mark Pearl

Composite Applications with PRISM

In this session Stephan will demonstrate how to leverage the Composite Application Libraries to create modularized applications for WPF and Silverlight. He will also show you how to do multi-targeted development by sharing lots of code between the web and desktop applications.
Presenter: Stephan Johnson

An Introduction to Pex and Moles

An introduction into Pex and Moles, covering the basics of Mole Types and Mole Stubs and Parameterised Testing.
Presenter: Dave Russell

ASP.NET Dynamic Data

I will briefly introduce ASP.NET Dynamic Data by showing how to build a complete data maintenance web application with almost zero code.

Moving on, I will demonstrate some standard ways of customising a Dynamic Data application, and some more advanced non-standard customisation techniques. I will finish off by illustrating how Dynamic Data libraries and controls can be leveraged in other applications that don't normally use dynamic data.

Presenter: Brady Kelly

ASP.NET MVC 3

As you probably already surmised, ASP.NET MVC 3 is the next major release of ASP.NET MVC. Join us as we highlight the upcoming features and modifications to this popular framework.
Presenters: Jaco Pretorius and Kobus Brummer

Visual Studio Mobile Site

Did you know that there is a mobile version of the Visual Studio website? I stumbled across it recently and it is a fantastic resource to have on your phone.

One of the really great parts is the Crack the Code game, which gives you four snippets of code (2x C#, 1x F#, 1x VB.NET) and asks you to figure out how many errors there are. If you work them out correctly you get access to wallpapers for your mobile device, and if you get all four correct you get a limited edition Visual Studio 2010 Windows 7 theme!

To access the site go to http://mobile.microsoft.com/visualstudio (note the mobile in the URL) on your mobile device!

Pulled Apart - Part VI: A simple download manager


Note: This is part of a series, you can find the rest of the parts in the series index.

A podcatcher like Pull really is just an RSS reader with a download manager built in to download the RSS enclosures. This means that both parts should work, and work very well. The RSS reader side is fairly easy to do; the downloader, however, is anything but easy.

WebClient

Initially I thought a simple download manager would be enough, and I built it around a .NET class called WebClient. This has a nice async method (DownloadFileAsync) and a few events you can subscribe to, which does a good job of downloading a file easily.

WebClient webclient = new WebClient();
webclient.DownloadFileCompleted += new System.ComponentModel.AsyncCompletedEventHandler(webclient_DownloadFileCompleted);
webclient.DownloadProgressChanged += new DownloadProgressChangedEventHandler(webclient_DownloadProgressChanged);
webclient.DownloadFileAsync(episode.EpisodeUri, this.Episode.Local_Path);

However, in exchange for this nice simple implementation you lose a ton of features that a more powerful download manager may have, in particular ways of dealing with errors. For me, living in South Africa where the bandwidth isn’t great, dealing with errors during downloads is essential, and so I eventually dropped WebClient.

HttpWebRequest

.NET also includes a fully featured HTTP request/response system built around HttpWebRequest and HttpWebResponse. These classes offer a ton of features which WebClient doesn’t; however, using them requires much more code than WebClient. Below is the code I use; some interesting bits are:

  • Setting a connection group name and setting UnsafeAuthenticatedConnectionSharing to true means that the system will attempt to reuse an existing pipe to the server. This can give some great performance improvements once handshaking is done.
  • Setting the user agent means that various stats programs can identify your client specifically.
  • Adding a range (AddRange) means that I can start at a different point in the file. This enables me to resume downloads which broke because of errors or because the application was closed. An important lesson I learnt is that some web servers hate the start point being set to 0 (which means the beginning), which is why I do a check first.
  • GetResponse gets the response headers from the server, but not the data.
  • GetResponseStream is where we actually start to get the data.
  • The Write call in the read loop is where we write the data to disk.
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(uri);
webRequest.ConnectionGroupName = "Pull (pull.codeplex.com)";
webRequest.UnsafeAuthenticatedConnectionSharing = true;
webRequest.UserAgent = "Pull (pull.codeplex.com)";

if (startPointInt > 0)
{
    webRequest.AddRange(startPointInt);
}

webRequest.Credentials = CredentialCache.DefaultCredentials;
HttpWebResponse webResponse = null;
try
{
    webResponse = (HttpWebResponse)webRequest.GetResponse();
}
catch (WebException err)
{
    // Error handling

    return;
}

Int64 fileSize = webResponse.ContentLength;
webFileStream = webResponse.GetResponseStream();
if (!fileInfo.Exists)
{
    localFileStream = new FileStream(fileInfo.FullName, FileMode.Create, FileAccess.Write, FileShare.None);
}
else
{
    localFileStream = new FileStream(fileInfo.FullName, FileMode.Append, FileAccess.Write, FileShare.None);
}

int bytesSize = 0;
byte[] downBuffer = new byte[2048];
while ((bytesSize = webFileStream.Read(downBuffer, 0, downBuffer.Length)) > 0)
{
    localFileStream.Write(downBuffer, 0, bytesSize);
    onUpdate(Convert.ToInt32((localFileStream.Length * 100) / (fileSize + startPointInt)));
}

As you can see it is more complex, but you can do so much more, like resuming downloads.

Final Thoughts

I want to point out a fantastic article by Andrew Pociu on building a download manager, which inspired a lot of my code.

In the end, changing to HttpWebRequest was a much better idea, but this is an area where I am still finding new types of errors and still putting in error handling to cope with the WIDE variety of errors you can get on the web.

Pulled Apart - Part V: You are a DB server with SQLite

Note: This is part of a series, you can find the rest of the parts in the series index.

One of the design decisions for Pull is that it should just run, without the need for an install. This requirement means that everything it needs to run should be available all the time, and this brought an interesting problem to my design: I needed a database to store all the info Pull uses (podcasts, episodes etc.) but I couldn’t require people to install SQL Server Express or PostgreSQL.

The solution was to use a file based database called SQLite, which requires just a few DLL files to provide all the database functionality I would expect my application to use, but without the need for a database server. As this is a .NET application I used one of the .NET wrappers, System.Data.SQLite. Usage with System.Data.SQLite can be via ADO.NET-like code or via the Entity Framework.
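For the ADO.NET-like route, usage looks much like any other ADO.NET provider; here is a minimal sketch (the file name is made up, and the Log table matches the class shown later in this post):

using System;
using System.Data.SQLite;

class Example
{
    static void Main()
    {
        // The connection string points at a file, not a server.
        using (SQLiteConnection connection = new SQLiteConnection("data source=example.sqlite"))
        {
            connection.Open();

            using (SQLiteCommand command = new SQLiteCommand("SELECT Source, Message FROM Log", connection))
            using (SQLiteDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetString(1));
                }
            }
        }
    }
}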

I initially used EF for my development, but I found a mass of issues due to the assumptions EF makes about when to open and close connections. These assumptions make sense when you have a DB server, but when you have a file which will be accessed from multiple threads they come back to bite, and they caused me no end of pain.

What I ended up doing to solve it was to build my own ORM, which makes use of reflection to handle mapping to classes, à la EF, but uses a static instance and is very eager to close the connection to the file. These changes solved the major stability issues caused by EF while still providing a decent developer experience.

Mapping

Mapping of my classes to the database is done using a simple attribute which has a name and a primary key indicator. For example, the log class looks like the following. Note that I use the same attribute for columns and tables, as I assume Class=Table and Property=Column. In the following all the property names match the DB column names, but this is not a requirement.

[DataStore(Name = "Log")]
internal class Log
{
    [DataStore(Name = "PK", PrimaryKey = true)]
    public Guid PK { get; set; }
    
    [DataStore(Name = "Source")]
    public string Source { get; set; }
    
    [DataStore(Name = "Occured")]
    public DateTime Occured { get; set; }
    
    [DataStore(Name = "StackTrace")]
    public string StackTrace { get; set; }

    [DataStore(Name = "Message")]
    public string Message { get; set; }
}

Using that information, I can take an object and, using reflection, build up the SQL needed. An example of this is how I create the update command:

private static SQLiteCommand ConvertToUpdateCommand<T>(T item)
{
    SQLiteCommand command = new SQLiteCommand();
    string insertCommandText = string.Format(CultureInfo.CurrentCulture, "UPDATE [{0}] SET ", ((DataStoreAttribute)typeof(T).GetCustomAttributes(typeof(DataStoreAttribute), false)[0]).Name);
    int parameterCounter = 0;

    object PKValue = null;
    string PKColumn = string.Empty;

    GetAttributedProperties(typeof(T), (property, attribute) =>
    {
        if (!attribute.PrimaryKey)
        {
            insertCommandText += string.Format(CultureInfo.CurrentCulture, "[{0}]=@A{1}, ", attribute.Name, parameterCounter);
            command.Parameters.AddWithValue(string.Format(CultureInfo.CurrentCulture, "A{0}", parameterCounter), property.GetValue(item, null));
            parameterCounter++;
        }
        else
        {
            PKValue = property.GetValue(item, null);
            PKColumn = attribute.Name;
        }
    });

    insertCommandText = insertCommandText.Remove(insertCommandText.Length - 2);
    insertCommandText += string.Format(CultureInfo.CurrentCulture, " WHERE [{0}]=@PK", PKColumn);
    command.Parameters.AddWithValue("PK", PKValue);
    command.CommandText = insertCommandText;

    return command;
}
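The GetAttributedProperties helper isn’t shown in the post; a minimal sketch of what it might look like, assuming the DataStore attribute above (requires using System.Reflection):

private static void GetAttributedProperties(Type type, Action<PropertyInfo, DataStoreAttribute> action)
{
    foreach (PropertyInfo property in type.GetProperties())
    {
        // Only properties decorated with [DataStore] take part in the mapping.
        object[] attributes = property.GetCustomAttributes(typeof(DataStoreAttribute), false);
        if (attributes.Length > 0)
        {
            action(property, (DataStoreAttribute)attributes[0]);
        }
    }
}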

Final Thoughts

I have been very happy with SQLite as a database, but the learning curve has been tough for someone coming from a database server background. Once I understood the limitations and the differences from DB servers, it has been a great experience.

I’ve been very happy with my ORM as well: it has made development very easy, letting me work with proper objects and really hiding all the DB functionality away. There are some performance issues in the ORM which I have not solved, and I am not feeling a major rush to solve them either, because the sizes of data in Pull are fairly light to start with. Examples of those performance issues are:

  • I have no way to batch commands in one transaction. So when inserting podcast episodes, for each episode I create a new command and a new transaction, open the DB, run the command, and close the DB and transaction. What I should have is a way to create a bunch of commands and then run them all using one transaction and one open/close of the DB (see the sketch after this list).
  • Updating updates every field in the object, regardless of whether it has changed. This is because I have no object tracking implemented to figure this out.
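For the first issue, the fix would look something like this sketch – one connection and one transaction shared by many commands (the method name and parameters are illustrative; requires using System.Collections.Generic and System.Data.SQLite):

private static void RunBatch(string connectionString, IEnumerable<SQLiteCommand> commands)
{
    using (SQLiteConnection connection = new SQLiteConnection(connectionString))
    {
        connection.Open();

        // One transaction and one open/close of the file for the whole batch.
        using (SQLiteTransaction transaction = connection.BeginTransaction())
        {
            foreach (SQLiteCommand command in commands)
            {
                command.Connection = connection;
                command.Transaction = transaction;
                command.ExecuteNonQuery();
            }

            transaction.Commit();
        }
    }
}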

Despite these issues, the system still runs really well and overall I am happy with the choice of DB.

Pulled Apart - Part IV: Talking to myself, using memory mapped files for communication


Note: This is part of a series, you can find the rest of the parts in the series index.

One of the special features of Pull is the ability to deal with special protocol handlers for podcasts; for example, if you click an iTunes Podcast link (itpc://) or a Zune Podcast link (zune://) then it should add the podcast to Pull. The way it works in Windows is that you register an executable with a protocol. Then when a user clicks a link with that protocol, Windows launches the associated executable, passing the URL as the arguments. It will launch a new instance of the executable even if an instance of that executable is already running.
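The registration itself is just a few registry keys under HKEY_CLASSES_ROOT. A minimal sketch of doing it from code, assuming the itpc:// protocol (Pull’s actual registration code isn’t shown here, and writing to HKEY_CLASSES_ROOT needs administrator rights):

using Microsoft.Win32;

class ProtocolRegistration
{
    static void Register(string exePath)
    {
        using (RegistryKey key = Registry.ClassesRoot.CreateSubKey("itpc"))
        {
            key.SetValue(string.Empty, "URL:iTunes Podcast Protocol");
            key.SetValue("URL Protocol", string.Empty);

            using (RegistryKey command = key.CreateSubKey(@"shell\open\command"))
            {
                // "%1" is replaced with the clicked URL, which arrives in the
                // new instance as a command line argument.
                command.SetValue(string.Empty, "\"" + exePath + "\" \"%1\"");
            }
        }
    }
}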

This means that when someone clicks a link, you could end up with a scenario like the following, where the program is running two instances and the second instance has the new feed info while the original instance doesn’t.


This is not an ideal scenario, and solving it meant solving two problems:

  1. Check if an application is already running.
  2. If there is an application running, tell the existing application to process the new feed.

Solving problem 1: Check if an application is already running

Checking if an application is already running is a well known problem and is solved using a mutex, which I put in the Main method of the application. The key parts of the code below are:

  • Figure out a unique name for the mutex – it needs to be unique across the whole PC. I use the executable path with any symbols stripped out, which allows me to run multiple instances based on different paths.
  • Mutex.OpenExisting tries to open a mutex with that name. If no mutex with that name exists, an exception is thrown. If it succeeds, you know the application is already running.
  • Otherwise, create the mutex.
  • Wrap everything in a try…finally so that the mutex is always cleaned up.
  • In the finally block, release the mutex.
string mutexName = Regex.Replace(Application.ExecutablePath, @"\W*", string.Empty);

try
{
    Mutex.OpenExisting(mutexName);
    // we found the mutex; if any parameters exist, pass them on and go on with life.
    if (arguments.Length > 0)
    {
        IPMF.SendMessageToServer(mutexName, arguments[0]);
    }

    return;
}
catch (WaitHandleCannotBeOpenedException)
{
    // no mutex with that name exists already... excellent (in your best Mr Burns voice please)
}

Mutex mut = new Mutex(true, mutexName);
try
{
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Database.ConnectionString = "data source=pull.sqllite";
    MainForm launchForm = new MainForm();

    if (Properties.Settings.Default.FirstRun)
    {
        Application.Run(new SettingsForm());
        Properties.Settings.Default.FirstRun = false;
    }

    launchForm.StartupArguments = arguments;
    using (IPMF ipmf = new IPMF(mutexName, message => { Bus.GetBus().Broadcast(DataAction.ParseFeed, message); }))
    {
        Application.Run(launchForm);
    }
}
finally
{
    mut.ReleaseMutex();
}

Solving problem 2: Communication with the existing instance

Now that we know whether an instance of the application is running, we need to solve the second issue: communicating with the existing application. There are a ton of ways to do this:

  • The first process monitors a special folder and the second process writes to that folder.
  • Use named pipes to communicate.
  • Write to a special table in the database and have the other process pick it up there.

All those are good, but as part of my learning I really wanted to use a new .NET 4 feature called a memory mapped file. This is a file which exists in memory and can optionally exist on disk too. While it is in memory, if you know its name you can access it from any process, which means it’s not limited to the process which created it.

Memory mapped files in memory-only mode also significantly outperform all the options above and offer a variety of powerful features. Memory mapped files backed by disk give powerful features too – for example you can take a 10 GB file on disk and load only the part from 2.5 GB to 3.5 GB into memory, which lets you work with just the part you need. For me though, I only need a tiny (< 100 KB) amount of memory and it never needs to be persisted to disk, because I am using it for communication between processes, so I used the in-memory-only option.
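As a sketch of the disk-backed scenario (the path and offsets are illustrative), CreateFromFile plus a view over just the range you want is all it takes:

using System;
using System.IO.MemoryMappedFiles;

class PartialView
{
    static void Main()
    {
        const long oneGb = 1024L * 1024 * 1024;

        using (MemoryMappedFile file = MemoryMappedFile.CreateFromFile(@"C:\huge.bin"))
        {
            // Map only the 1 GB between the 2.5 GB and 3.5 GB marks;
            // the rest of the file is never mapped.
            using (MemoryMappedViewStream view = file.CreateViewStream(5 * oneGb / 2, oneGb))
            {
                Console.WriteLine(view.ReadByte());
            }
        }
    }
}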


The process itself becomes very simple:

  1. The second process writes the arguments to the memory mapped file.
  2. The first process reads the arguments from the memory mapped file and acts upon it.
  3. The first process clears the memory mapped file.

To handle this I wrapped everything in a class called IPMF, which stands for inter-process messaging framework – sounds professional, doesn’t it? Creating the memory mapped file is very easy using the MemoryMappedFile.CreateNew method, which takes a name, a max size for the file and then various options to control read/write access, how to allocate pages etc.

serverMemoryMappedFile = MemoryMappedFile.CreateNew(instanceName, maxSize, MemoryMappedFileAccess.ReadWrite, MemoryMappedFileOptions.DelayAllocatePages, null, HandleInheritability.None);

The one thing not documented, and which caused me to go grey, is that the name of the memory mapped file needs to be unique not only among other memory mapped files but also among mutexes on the machine, because the memory mapped file uses a mutex internally.

Reading is very easy: you first create a view, which returns a stream that you can read with a binary reader and parse to a string or any other data format.

using (MemoryMappedViewStream stream = serverMemoryMappedFile.CreateViewStream())
{
    using (BinaryReader reader = new BinaryReader(stream))
    {
        string data = reader.ReadString();
        // act on the message data here
    }
}
Finally, writing to a memory mapped file is also easy and is basically the same as creating and reading:

  1. Open the memory mapped file using the OpenExisting method.
  2. Create a view to get the data stream.
  3. Write to the stream using a BinaryWriter


public static void SendMessageToServer(string instanceName, string message)
{
    instanceName += ".memoryMappedFile";
    using (MemoryMappedFile clientMemoryMappedFile = MemoryMappedFile.OpenExisting(instanceName, MemoryMappedFileRights.Write, HandleInheritability.None))
    {
        using (MemoryMappedViewStream stream = clientMemoryMappedFile.CreateViewStream())
        {
            using (BinaryWriter writer = new BinaryWriter(stream))
            {
                writer.Write(message);
            }
        }
    }
}


So all I do in the first instance is continually check the memory mapped file; if the data I read is not empty, I use the bus to tell my application to process that new podcast URL.

Final Thoughts

Memory mapped files are exceptionally easy to use and open a lot of doors for solving problems better, from loading parts of files to inter-process communication – definitely something worth investing time in.

Pulled Apart - Part III: Get on the bus!

Note: This is part of a series, you can find the rest of the parts in the series index.

One of the aspects of Pull is that it had to be multi-threaded, because things like downloading a massive podcast shouldn’t lock up the UI. Threading has become pretty easy in .NET 4 thanks to things like PLINQ and the Parallel Extensions; however, cross-thread communication hasn’t gotten any easier in .NET 4.

My idea to solve this was to create an internal bus – which is just an implementation of the pub/sub pattern. A bunch of subscribers register with the bus for a specific message type, and when a message is given to the bus it passes it to the correct subscribers.


The first thing I did was create a simple singleton instance of my bus class:

internal class Bus
{
    private static Bus bus = new Bus();

    private Publisher publisher;

    private Bus()
    {
        publisher = new Publisher();
    }

    public static Bus GetBus()
    {
        return bus;
    }

This ensures that all threads get the same instance. Inside my bus class I implement the new .NET 4 IObserver/IObservable interfaces, which give me all the pub/sub magic. This is all internal to the bus class, so the rest of my application only ever deals with the bus. For example, here are the methods for registering a subscriber, which hide the pub/sub concept completely:

public void Register<T>(DataAction actions, Action<T> method, Control control = null)
{
    publisher.Subscribe(new Subscriber<T>(actions, method, control));
}

public void Register(DataAction actions, Action method, Control control = null)
{
    Action<object> fakeMethod = value => { method(); };
    this.Register(actions, fakeMethod, control);
}

One of the options when registering a subscriber is that you can pass in a control, which I use for handling objects owned by other threads; this makes it very easy to update the UI.
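Pull’s Subscriber class isn’t shown here, but the control option presumably boils down to something like this sketch (the Deliver method name is assumed; requires using System and System.Windows.Forms):

private void Deliver<T>(Action<T> method, T data, Control control)
{
    if (control != null && control.InvokeRequired)
    {
        // Marshal the call onto the thread that owns the control (the UI thread).
        control.Invoke(method, data);
    }
    else
    {
        method(data);
    }
}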

Broadcasting to all subscribers who have registered for a message type is handled by a very simple method:

public void Broadcast<T>(DataAction action, T data)
{
    publisher.Update(action, data);
}

public void Broadcast(DataAction action)
{
    this.Broadcast<object>(action, null);
}

To identify the type of message I am using an enum, which I do not feel too great about. The advantage of enums is that there are no magic strings the compiler can’t check (i.e. if I mistype a message, the compiler tells me), and I can use flags to broadcast multiple messages at once. The downside of enums is that adding a new message means editing the enum list, which isn’t so great.
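To make that concrete, here is a sketch of what such a flags enum might look like (ParseFeed appears in Part IV of this series; the other names are assumed for illustration):

[Flags]
internal enum DataAction
{
    None = 0,
    ParseFeed = 1,
    EpisodeDownloaded = 2,
    PodcastUpdated = 4
}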

Final Thoughts

Overall I am exceptionally happy with the bus, as it has solved so many of the problems I have had with multi-threaded applications, and I think it should be a standard part of application design in future.

Pulled Apart - Part II: What? You're not using TFS!

Note: This is part of a series, you can find the rest of the parts in the series index.

Some people see learning as a side effect of software development; others believe that everything must be known upfront and so no learning occurs. The reality is that learning during software development is a core part of it, and you should embrace it. One of the things I have done with Pull is to host the code on CodePlex. CodePlex offers two ways to store source code – you can use TFS/SVN or you can use Mercurial.

In the past I have always used TFS because I am comfortable there – it’s a tool I know well and I like using it. However, to embrace the learning for Pull, I decided it would be a Mercurial project. Which brought me to my first issue: I didn’t have any Mercurial tools, so I went off to find a set I liked.

Since I didn’t want to install anything – that’s how my machine stays so light and fast – a lot of the packages out there were not an option (TortoiseHG, I’m looking at you). In the end I chose Mercurial Cmd Portable from PortableApps.com, which gave me a light option but meant no GUI, which really isn’t a bad thing.

Comparing it to TFS is a two-part comparison:

  1. If I compare it to the full GUI TFS experience inside Visual Studio, then I prefer the Mercurial experience because working disconnected from the server and coming back online is easier.
  2. The TFS Power Tools provide a command prompt tool called tfpt.exe, which has an online option that makes the whole experience just as good as what Mercurial offers.

One of the big areas of difference between how TFS and Mercurial work is branching/merging/labels/forks etc., which I have not experienced yet.

Here are some of the things I did in my Mercurial setup which may help other people.

.hgignore

My ignore file for C# projects done in Visual Studio, with CodeRush installed.

syntax: glob

*.csproj.user
*/[Oo]bj/*
*/[Bb]in/*
*.suo
*DXCore.Solution
hgignore[.-]* 
[Tt]humbs.db 
*.user
*.bak.* 
*.bak 
*.[Cc]ache
*/[Pp]ublish/*
*.vssscc
*.vs10x

hgrc

My config is fairly straightforward: I have an auth setup so that my CodePlex details are remembered, and I created an alias called codeplex. This lets me just type hg codeplex; it prompts for my password and that’s it – one line gets the repo pushed to CodePlex. I also set up WinMerge as the diff tool, because I am a WinMerge fan and this lets me keep using it easily.

[auth]
codeplex.prefix=hg01.codeplex.com/pull
codeplex.username=rmaclean
codeplex.schemes=https

[alias]
codeplex=push https://hg01.codeplex.com/pull

[ui]
username=Robert MacLean <[email protected]>

[extensions]
; must uncomment this line
extdiff =

[extdiff]
cmd.winmerge = E:\PortableApps\WinMergePortable\App\WinMerge\WinMergeU.exe
opts.winmerge = /r /e /x /u /wl