Outlook 2007: POP3 and delayed email, or how to avoid downloading RSS feeds too often

Submitted by Robert MacLean on Mon, 09/15/2008 - 17:34

I have been spoilt for a long time by living in an Exchange environment, so when I recently had to use a POP3 environment (even if it was just temporarily), I felt like I had gone back 15 years. One of the reasons it feels like I have gone from Vista to Windows 3.1 is that Exchange pushes the mail down (or at least that is how it appears to work – I’m no Outlook expert), so mail arrives instantly when someone sends it.
Unfortunately POP3 is pull based, so mail doesn't come down until Outlook checks for it. The horrible part is that by default it is configured to check only every 30 minutes :( That means if someone misses your check window (like they would know) you could wait almost forever for their mail. Thankfully you can change that: first go to Tools -> Options.
Next go to Mail Setup and click the Send/Receive button.
By default you should have one group (called All Accounts) and below that there is an option to schedule an automatic send/receive every x minutes. In the picture below you'll see it is set to 1 minute, which really helps (close enough to instant that it doesn't matter).
However, if you are like me then you also use Outlook for RSS feeds, and that change means you will now be downloading feeds every minute! You can fix that easily by splitting the RSS and email check times.
To do that, click the Edit button, remove RSS from being included, and click OK. If you are a perfectionist (which you may gather I am from my picture below), you could also click Rename to give the group a name that is easier to understand. Next click the New button and include only RSS, not Email. Click OK and you should now have two Send/Receive groups. You can then click on RSS in the list and set a separate interval for how often it should check (once an hour is good). Click Close, then OK, and you are done :)

Help files (CHM) don't work

Submitted by Robert MacLean on Tue, 07/08/2008 - 07:31
So in Vista and/or IE8 there seems to be an issue with CHM files which you have downloaded from the Internet, where they just do not load their content.

I stumbled across this with the HMC 4.5 documentation which is presented in, you guessed it, CHM help files. Just like when I found the MSDN Library broken, this is caused by a security feature trying to protect you. To resolve it, close the file, right-click it and select Properties. In the bottom right hand corner there is a nice little Unblock button; once you click it, you can use the file normally.
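For the curious, that Unblock button works by deleting a small NTFS alternate data stream called Zone.Identifier that Windows attaches to downloaded files. Here is a minimal Python sketch of doing the same thing yourself (the function name is mine, and it assumes the file sits on an NTFS volume):

```python
import os

def unblock(path):
    """Delete the Zone.Identifier alternate data stream from a file.

    Windows marks downloaded files by attaching this hidden stream;
    removing it is what the Unblock button does. Returns True if a
    stream was removed, False if none was present.
    """
    try:
        # On NTFS, "file.chm:Zone.Identifier" addresses the hidden stream.
        os.remove(path + ":Zone.Identifier")
        return True
    except OSError:
        # No stream present (the file was never blocked),
        # or the volume doesn't support alternate data streams.
        return False
```

On a file that was never downloaded (or on a non-NTFS volume) it simply returns False and leaves the file alone.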

The Zen of Hosting: Part 11 - DNS

Submitted by Robert MacLean on Tue, 07/08/2008 - 07:30

The last of the hurdles to overcome for the deployment was the running of the DNS server. This is because we run on a private IP range internally and use ISA to map external IPs and ports to the services we want to publish (i.e. NAT). This basically allows us to lower the attack surface, because we only let out what is needed, and we can also mix and match servers on the same IP (lowering our IP address usage).

This also means that we not only have DNS servers to allow the servers and staff internally to find the other servers and services, but we also have to have external servers to allow users on the big bad Internet to find them. There is a lot of duplication of work in this deployment scenario, as you have to create records on two servers in the best case and four in the worst, and configure them differently. This also increases the room for mistakes considerably. The upside is that internal staff do not need to go out of the LAN and back in via the net, or even go through the external firewalls, and that we can have different domain names internally and externally, which is great for testing and development and only publishing when needed.

What I do not understand is why the DNS server team at Microsoft can't take a leaf from MSCRM 4.0's IFD deployment and allow you to specify what the internal IP range is, and set A/CNAME records for both internal and external IP ranges. Then when an internal IP requests a resolution it would get the internal A/CNAME records, and everyone else would get the external ones. This is such a logical thing to do that BIND has had the feature for ages, so come on Microsoft, steal another idea from Linux ;)
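For reference, this is roughly what BIND's split-horizon support looks like: a sketch of a named.conf using views (the zone name, file paths and network ranges are all invented for illustration):

```
// Clients on the private ranges get the internal zone file,
// everyone else gets the external one.
acl internal-nets { 192.168.0.0/16; 10.0.0.0/8; };

view "internal" {
    match-clients { internal-nets; };
    zone "example.com" {
        type master;
        file "zones/internal/example.com.db";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/external/example.com.db";
    };
};
```

The same zone name resolves differently depending on which view the querying client matches, which is exactly the internal/external split described above.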

One of the design choices for the DNS structure is a concept of mine called IP address abstraction. The idea of DNS is to get us away from IPs, but the problem is that in normal DNS configurations you end up with loads of A records, and the moment you need to change IP addresses you end up spending days changing them through all the records. With IP address abstraction you take a core domain name and create a single A record for each IP you have.



Everywhere else you then use CNAMEs pointing to those names, regardless of the domain.


The advantage is that if the IPs ever change, you change them in one place and it reflects everywhere, yet the experience for the end user is exactly the same as DNS has always been.
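As an illustration (all names and addresses here are invented, and in practice each name would live in its own zone file), the scheme looks something like this in zone-file terms:

```
; The core zone: exactly one A record per IP address.
ip1.core.example.com.    IN  A      203.0.113.10
ip2.core.example.com.    IN  A      203.0.113.11

; Everything else is a CNAME onto those names, whatever the domain.
www.customer-one.com.    IN  CNAME  ip1.core.example.com.
mail.customer-two.net.   IN  CNAME  ip1.core.example.com.
shop.customer-one.com.   IN  CNAME  ip2.core.example.com.
```

Renumbering then means editing only the two A records in the core zone; every CNAME follows automatically.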

Great Hyper-V Post

Submitted by Robert MacLean on Fri, 07/04/2008 - 07:51
Great post I found on the blogs about 5 key things to know about Hyper-V; I thought it may be of use to someone following my previous post on Hyper-V. I definitely don’t agree with their number one point that for serious usage it should be run on Core, because of the reasons in this post. Anyway, you can read more at Top 5 things to know about Hyper-V.

The Zen of Hosting: Part 10 - Windows 2008 Core

Submitted by Robert MacLean on Fri, 07/04/2008 - 07:49
Since we had Windows 2008 we just had to try out the Core edition, which is the version of Windows where Microsoft promised everything would be command line based. I like to think of it this way: if Vista stole the UI from the Apple Mac, then Win2k8 tried to steal it from Linux...

So before I get into Core, let me first state that Win2k8 is the best server OS Microsoft has ever released. It is amazing how well polished everything is, and the tools that are there are great. Does it compare to Linux servers? Well, in some places it kicks ass and in others it doesn’t, but since Linux servers are the de facto standard for command line based systems, if we compare the command line features then Microsoft has done a HORRIBLE job.

All that is actually happening is that you are getting the normal command prompt in a window, and they dropped explorer.exe from being the shell. In fact explorer.exe does not even get installed, but a lot of our old favourites are there: Ctrl+Alt+Del still brings up the usual menu and task manager still works.

Actually Microsoft dropped so much that the gain in RAM is impressive (our average RAM usage is normally 750MB but on Core it is a mere 300MB), and the shrinkage in attack surface and patch requirements is great.

Getting back to the command prompt as the shell: this is likely the biggest single mistake of Core. It’s not like Microsoft doesn’t have a great command line system, called PowerShell, which they could have used. In fact so little has been added to the command line that after this experience I went to a Win2k3 machine and was able to do most of this anyway, and it’s not hard to kill explorer.exe as the shell in Win2k3. One advantage of doing this Core mockup on 2k3 is that at least Internet Explorer is there for you to get online for help; Win2k8 Core has no decent help (just the same old crappy stuff).
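If you want to try that mock-Core setup on Win2k3 yourself, swapping the shell is a single registry value (shown here as a machine-wide change; back up the key first, as this sketch assumes the default Winlogon configuration):

```
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" ^
    /v Shell /t REG_SZ /d cmd.exe /f
```

After the next logon you get a bare command prompt instead of explorer.exe; setting the value back to explorer.exe restores the normal desktop.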

Linux has man pages, PowerShell has Get-Help, the console has... Thank the heavens that I was able to use my laptop to get onto the Internet. For example, I had problems with the first two Core boxes when trying to run Hyper-V on them; they just gave all kinds of RPC issues. It turned out that I had not set the DNS correctly using netsh: I had set it to register for Primary only and not Both. What the difference is, is beyond me, because setting network settings via the Windows GUI for the last 20 years obviously sets this correctly, so why make it so much tougher?
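For anyone hitting the same thing, the relevant netsh syntax looks something like this (the interface name and address are examples; the register parameter at the end is the part that bit me, and it accepts none, primary or both):

```
netsh interface ip set dns name="Local Area Connection" source=static addr=10.0.0.1 register=both
```

Leaving register off, or setting it to primary, is what produced the RPC errors in my case.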

Another interesting feature of Core, which I never hit my head against myself but learnt about when I attended Win2k8 IIS training that Microsoft ran, is that in Core you can't run ASP.NET web sites, because Core doesn't have the .Net framework. This is because the .Net framework installer needs a GUI. I suspect this is the same reason why PowerShell can't be used, being .Net based and all. But the part I don’t understand is that THERE IS A FRIGGING GUI! It's all around the command prompt window!

My recommendation is to avoid Core, as the extra work doesn’t make up for the cost of a little bit of extra RAM; rather spend less time setting up the server and more time billing customers, and buy the RAM. Hopefully Windows Server vNext gets it right.

The Zen of Hosting: Part 9 - Hyper-V

As I approach the end of this series I want to highlight some of the technology that the hosting machine is built on and some of the experiences I had with it. These last few posts are much shorter than the earlier ones but hopefully provide some quick bite-sized info.

So if you have looked at standard HMC, and then add all the technology we have layered on top of it, you would assume there is a building full of servers. The reality is the server room has lots of space and isn’t that big. How did we achieve this? Slow applications because we are running everything on fewer servers? Not at all.

We bought some seriously powerful HP machines, loaded them with a ton of RAM and installed Windows 2008. But how does that help with running lots of systems, and doesn't HMC break if it runs on Win2k8 (see way back to part 2)? Well, Win2k8 has the best virtualisation technology Microsoft has ever developed, named Hyper-V. This is seriously cool stuff in that it actually runs prior to Windows starting and virtualises Windows completely (rather than running virtual machines on an OS, they run next to it). The performance compared to Virtual Server is not even worth talking about; it basically pushes Virtual Server into the stone age.

It is very fast and it seems to handle the randomness of the servers' usage (those little spikes when you run multiple machines on one piece of hardware) very well. But not everything is virtualised: there is a monster of an active-active SQL Server cluster (since so much needs SQL), and we have a number of oddities, such as the box which does media streaming, because some specialised hardware can’t be used in a virtual machine. A worry when we started with Hyper-V was its beta/RC status... Well, with thousands of hours of uptime logged so far by servers on it, it has been ROCK solid.

Submitted by Robert MacLean on Tue, 07/01/2008 - 11:47

The Zen of Hosting: Part 8 - Microsoft Dynamics GP and Terminal Services

Submitted by Robert MacLean on Fri, 06/27/2008 - 08:39

For this instalment the product I am going to cover is Microsoft Dynamics GP, which is very interesting compared to MSCRM (part 6) and MOSS (part 7) in that it is not web based, and thus a completely new challenge to expose in a web based hosting environment. For those who do not know the architecture, it is a Windows Forms application (not sure if it is .Net or WinAPI), but the GUI is a thin veil over a monster SQL database with so many tables and stored procs it is scary. So the normal way is that the user gets the client installed and the client connects directly to the SQL server. If you are thinking that for hosting you end up having to allow direct connections over the web to SQL, think again: the security risk makes that a huge no. After spending some time investigating the other people offering hosted GP, the solution everyone else seems to offer is to give you a server and let you remote in via Citrix. As this is a Microsoft end to end solution, Citrix is not an option, but Microsoft does have Terminal Services (TS) to compete, and in Windows Server 2008 it can compete better than before. TS has always been a way to connect to a full session, which is nice, but we don't want nice, we want amazing. So the TS in Windows 2008 has a feature called Remote Applications.

Remote Apps lets an admin publish an application to a user, so it runs from a special icon, an MSI file (which you could deploy using AD or System Centre) or from a web site, and looks just like it is running on the user's machine. In the background it is spawning a normal TS session on the server, starting the application and pushing only the UI of the application to the user. It's great because the user thinks it's on their machine, and it's super fast thanks to all the server power the application has; it is not fighting for resources on the client machine.
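Under the covers that special icon is essentially an .rdp file with a few extra RemoteApp settings; a trimmed sketch of the interesting lines (the server name and application alias here are invented):

```
full address:s:ts.example.com
remoteapplicationmode:i:1
remoteapplicationprogram:s:||DynamicsGP
remoteapplicationname:s:Microsoft Dynamics GP
```

The remoteapplicationmode flag is what tells the client to render just the application window instead of a full desktop session.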

As this is the first version there are still some rough edges which need addressing. Firstly, the application still runs on the server, so if you go File -> Open you browse the server's file system :( I know TS can share drives from the client to the host, but they look like other drives, not the normal C, D, etc. users are expecting. What should happen is that the admin should be able to stop the server's drives from being exposed, so that the client's drives are the only ones shown. The same should apply to printers. One advantage for GP is that working with the file system isn't a big requirement, but printing is, and that is less of a pain. The next area is security: it is still launching a TS session, and that means if you want to allow a user to run a remote application, they end up needing login rights. I understand the technical requirements around this, but there should be a way at the terminal services level to separate people who will log in to the machine via TS from those who just need Remote Apps. Despite all this, Microsoft Dynamics GP looked like it was going to be difficult but in the end was very easy to deploy.

The missing feature of remote desktop

Submitted by Robert MacLean on Tue, 03/11/2008 - 08:42

Remote desktop for Vista (and for XP if you download the update) has a great feature: it allows you to save the username/password combination so you don't have to type it in all the time. When I put the update on XP, I was doing work for one of the large banks in the country and worried about what would happen if someone stole my laptop. If they could get in, they would just need to open remote desktop to access various systems :(

Well, Windows Server 2008 finally fixes this by enforcing a rule which denies saved passwords. Meaning whether you save or not, you have to retype.
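The rule in question is the "Do not allow passwords to be saved" Terminal Services group policy; when set, it maps to a registry value along these lines (shown here as a direct registry edit, though you would normally set it via Group Policy):

```
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" ^
    /v DisablePasswordSaving /t REG_DWORD /d 1 /f
```

With that in place, the client prompts for credentials on every connection regardless of whether the user ticked the save box.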

Another great feature in 2008 is the ability to require network level authentication for remote desktop, which makes it even more secure than normal, with the only requirement on the client being that you run Vista.