Help files (CHM) don't work

Submitted by Robert MacLean on Tue, 07/08/2008 - 07:31
So in Vista and/or IE8 there seems to be an issue with CHM files which you have downloaded from the Internet, where they just do not load their content.

I stumbled across this with the HMC 4.5 documentation which is presented in, you guessed it, CHM help files. Just like when I found the MSDN Library Broken, this is caused by a security feature trying to protect you. To resolve it you need to close the file, right-click it and select Properties. There, in the bottom right-hand corner, is a nice little Unblock button; once you click it you can use the file normally.
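As an aside, if you have a whole folder of blocked CHMs, that Unblock button is just removing an NTFS alternate data stream called Zone.Identifier, so you can strip it in bulk from the command line. A rough sketch (the file name is made up; streams.exe is the Sysinternals tool):

    rem Show the stream that marks the file as downloaded from the Internet
    more < HMC45Docs.chm:Zone.Identifier

    rem Delete the Zone.Identifier stream from every CHM in the folder
    streams.exe -d *.chm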

The Zen of Hosting: Part 11 - DNS

Submitted by Robert MacLean on Tue, 07/08/2008 - 07:30

The last of the hurdles to overcome for the deployment was the running of the DNS server. This is because we run on a private IP range internally and use ISA to map external IPs and ports to the services we want to publish (i.e. NAT). This basically allows us to lower the attack surface, because we only let out what is needed, and we can also mix and match servers on the same IP (lowering our IP address usage).

This also means that we not only have internal DNS servers to allow the servers and staff to find the other servers and services, but we also have to have external ones to allow users on the big bad Internet to find them. There is a lot of duplication of work in this deployment scenario, as you have to create records on two servers in the best case and four in the worst, and configure them differently. This also means the room for mistakes increases considerably. The upside is that internal staff do not need to go out of the LAN and back in via the net, or even go through the external firewalls, and that we can have different domain names internally and externally, which is great for testing and development and only publishing when needed.

What I do not understand is why the DNS server team at Microsoft can't take a leaf out of MSCRM 4.0's IFD deployment book and allow you to specify what the internal IP range is, then let you set A/CNAME records for both the internal and external IP ranges. So when an internal IP requests a resolution it gets the internal A/CNAME records, and everyone else gets the external ones. This is such a logical thing to do that BIND has had the feature for ages, so come on Microsoft, steal another idea from Linux ;)
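For those who have not seen it, this is roughly what BIND's split view looks like. A minimal sketch (the zone file names and IP range are made up):

    // named.conf: internal clients get one copy of the zone, everyone else the other
    view "internal" {
        match-clients { 192.168.0.0/24; };
        zone "test.com" {
            type master;
            file "zones/test.com.internal";
        };
    };

    view "external" {
        match-clients { any; };
        zone "test.com" {
            type master;
            file "zones/test.com.external";
        };
    };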

One of the design choices for the DNS structure is a concept of mine called IP address abstraction. The idea of DNS is to get us away from IPs, but the problem is that in normal DNS configurations you end up with loads of A records, and the moment you need to change IP addresses you end up spending days changing them through all the records. With IP address abstraction you take a core domain name and create a single A record for each IP you have.

Examples:

  • internal1.test.com A 192.168.0.1
  • internal2.test.com A 192.168.0.2

What you do then is, everywhere else, use CNAMEs pointing to those names, regardless of the domain name.

Example (the customer domains below are made up):
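
  • www.customer1.com CNAME internal1.test.com
  • mail.customer2.com CNAME internal2.test.com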

The advantage is that if the IPs ever change, you change them in one place and it reflects everywhere, yet the experience for the end user is exactly the same as DNS has always been.
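On a Microsoft DNS server this pattern is also easy to script with dnscmd. A quick sketch (the server and customer zone names are made up):

    rem One A record per IP in the core zone
    dnscmd dns01 /RecordAdd test.com internal1 A 192.168.0.1

    rem Everything else is a CNAME pointing at it
    dnscmd dns01 /RecordAdd customer1.com www CNAME internal1.test.com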

Great Hyper-V Post

Submitted by Robert MacLean on Fri, 07/04/2008 - 07:51
Great post I found on the blogs about 5 key things to know about Hyper-V; thought it may be of use to someone following my previous post on Hyper-V. I definitely don't agree with their number one point that for serious usage it should be run on Core, because of the reasons in this post. Anyway, you can read more at Top 5 things to know about Hyper-V

The Zen of Hosting: Part 10 - Windows 2008 Core

Submitted by Robert MacLean on Fri, 07/04/2008 - 07:49
Since we had Windows 2008 we just had to try out the Core edition, which is the version of Windows where Microsoft promised everything would be command line based. I like to think of it this way: if Vista stole the UI from the Apple Mac, then Win2k8 tried to steal it from Linux...

So before I get into Core, let me first state that Win2k8 is the best server OS Microsoft has ever released. It is amazing how well polished everything is, and the tools that are there are great. Does it compare to Linux servers? Well, in some places it kicks ass and in others it doesn't; but since Linux servers are the de facto standard for command-line based systems, if we compare just the command-line features then Microsoft has done a HORRIBLE job.

All that is actually happening is that you are getting the normal command prompt in a window, and they dropped Explorer.exe from being the shell. In fact explorer.exe does not even get installed, but a lot of our old favourites are there: Ctrl+Alt+Del still brings up the usual menu, and Task Manager still works.

Actually Microsoft dropped so much that the gain in RAM is impressive (our average RAM usage is normally 750MB, but on Core it is a mere 300MB), and the shrinkage in attack surface and patch requirements is great.

Getting back to cmd.exe as the shell: this is likely the single biggest mistake of Core. It's not like Microsoft doesn't have a great command line system, called PowerShell, which they could have used. In fact there is so little added to the command line that after this experience I went to a Win2k3 machine and was able to do most of this anyway, and it's not hard to swap out explorer.exe as the shell in Win2k3. One advantage of doing this Core mockup on 2k3 is that at least Internet Explorer is there for you to get online for help; Win2k8 Core has no decent help (just the same old crappy command prompt stuff).
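If you want to try that Win2k3 mockup yourself, the shell is just a registry value. A sketch (change it at your own risk, and keep a way back to explorer.exe handy):

    rem Swap the shell from Explorer to the command prompt
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell /t REG_SZ /d cmd.exe /f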

Linux has man pages, PowerShell has Get-Help, the console has... Thank the heavens that I was able to use my laptop to get onto the Internet. For example, I had problems with the first two Core boxes when trying to run Hyper-V on them; it just gave all kinds of RPC issues. It turned out I had not set the DNS correctly using netsh: I had set it to register for Primary Only and not Both. What the difference is, is beyond me, because the Windows GUI for network settings has obviously been setting this correctly for the last 20 years, so why make it so much tougher?
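For anyone who hits the same RPC issue, the command in question looked something like the below; the interface name and IP are examples, and I am quoting the syntax from memory, so check netsh's built-in help before trusting it:

    rem The register option at the end needs to be "both", not "primary"
    netsh interface ipv4 set dnsserver name="Local Area Connection" source=static address=192.168.0.10 register=both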

Another interesting feature of Core, which I never had to bang my head against myself but learnt about when I attended Win2k8 IIS training that Microsoft ran, is that in Core you can't run ASP.NET web sites, because Core doesn't have the .NET Framework. The trainer said this is because the .NET Framework installer needs a GUI. I suspect this is the same reason why PowerShell can't be used, being .NET based and all. But the part I don't understand is that THERE IS A FRIGGING GUI! It's all around the command prompt window!

My recommendation is to avoid Core, as the extra work doesn't make up for the cost of a little bit of extra RAM; rather spend less time setting up the server, more time billing customers, and buy the RAM. Hopefully Windows Server vNext gets it right.

The Zen of Hosting: Part 9 - Hyper-V

Submitted by Robert MacLean on Tue, 07/01/2008 - 11:47

As I approach the end of this series I want to highlight some of the technology that the hosting platform is built on and some of the lessons I learnt along the way. These last few posts are much shorter than the earlier ones but hopefully provide some quick bite-sized info.

So if you have looked at standard HMC, and then add all the technology we have layered on top of it, you would assume there is a building full of servers. The reality is the server room has lots of space and isn't that big. How did we achieve this? Slow applications because we are running everything on fewer servers? Not at all.

We bought some seriously powerful HP machines, loaded them with a ton of RAM and installed Windows 2008; but how does that help with running lots of systems, and doesn't HMC break if it runs on Win2k8 (see way back to part 2)? Well, Win2k8 has the best virtualisation technology Microsoft has ever developed, named Hyper-V. This is seriously cool stuff in that it actually runs prior to Windows starting and virtualises Windows completely (rather than the virtual machines running on an OS, they run next to it). The performance compared to Virtual Server is not even worth talking about; it basically pushes Virtual Server into the stone age.
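If you want to poke at it from the parent partition, Hyper-V exposes everything over WMI in the root\virtualization namespace. A quick sketch that should list the VMs (and the host itself):

    rem List the virtual machines Hyper-V knows about and their state
    wmic /namespace:\\root\virtualization path Msvm_ComputerSystem get ElementName,EnabledState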

It is very fast and it seems to handle the randomness of the servers' usage (those little spikes when you run multiple machines on one piece of hardware) very well. But not everything is virtualised: there is a monster of an active-active SQL Server cluster (since so much needs SQL), and we have a number of oddities such as the box which does media streaming, due to the fact that some specialised hardware can't be used in a virtual machine. A worry when we started with Hyper-V was its beta/RC status... Well, with thousands of hours of uptime logged so far by servers on it, it has been ROCK solid.

Speaking at Tech-Ed Africa

Submitted by Robert MacLean on Fri, 06/27/2008 - 08:46

I can now officially let out one of my many secrets, which is that I am speaking at Tech-Ed Africa this year! Oddly enough I am speaking about something I have never blogged about: WPF and building business applications with it. I will be co-speaking with a good friend, Simon (from Blacklight), who is an amazing designer. It will be a very fun talk. For more details see the Tech-Ed Africa site at http://www.tech-ed.co.za

The Zen of Hosting: Part 8 - Microsoft Dynamics GP and Terminal Services

Submitted by Robert MacLean on Fri, 06/27/2008 - 08:39

For this instalment the product I am going to cover is Microsoft Dynamics GP, which is very interesting compared to MSCRM (part 6) and MOSS (part 7) in that it is not web based, and thus a completely new challenge to expose in a web based hosting environment. For those who do not know the architecture, it is a Windows Forms application (not sure if it is .NET or WinAPI), but the GUI is a thin veil over a monster SQL database with so many tables and stored procs it is scary. So the normal way is that the user gets the client installed and the client connects directly to the SQL server. If you are thinking that for hosting you end up having to allow direct connections to SQL over the web, think again; the security risk makes that a huge no.

After spending some time investigating the other people offering hosted GP, the solution everyone else seems to offer is to give you a server and let you remote in via Citrix. As this is a Microsoft end-to-end solution, Citrix is not an option, but Microsoft does have Terminal Services (TS) to compete, and in Windows Server 2008 it can compete better than before. TS has always been a way to connect to a full session, which is nice, but we don't want nice, we want amazing. So the TS in Windows 2008 has a feature called Remote Applications.

Remote Apps lets an admin publish an application to a user, so it runs from a special icon, an MSI file (which you could deploy using AD or System Centre) or from a web site, and looks just like it is running on the user's machine. In the background it is spawning a normal TS session on the server, starting the application and pushing only the UI of the application to the user. It's great, as the user thinks it's on their machine, and it's super fast thanks to all the server power the application has, since it is not fighting for resources on the client machine.
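Under the covers a Remote App is just an .rdp file with a few extra settings. A trimmed-down sketch (the server name and application alias are made up):

    full address:s:ts01.test.com
    remoteapplicationmode:i:1
    remoteapplicationprogram:s:||DynamicsGP
    remoteapplicationname:s:Microsoft Dynamics GP
    drivestoredirect:s:*

The remoteapplicationmode setting is what tells the client to draw just the application window rather than a full desktop, and drivestoredirect controls the client drive sharing I mention below.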

As this is the first version there are still some rough edges which need addressing. Firstly, the application still runs on the server, so if you go File -> Open you browse the server's file system :( I know TS can share drives from the client to the host, but they look like extra drives, not the normal C, D, etc. users are expecting. What should happen is that the admin should be able to stop the server's drives from being exposed so that the client's drives are the only ones shown. The same should apply to printers. One advantage for GP is that working with the file system isn't a big requirement, but printing is, and that at least is less of a pain. The next area is security: it is still launching a TS session, and that means if you want to allow a user to run a remote application they end up needing login rights. I understand the technical requirements around this, but there should be a way, at the Terminal Services level, to separate people who may log in to the machine via TS from those who just need Remote Apps. Despite all this, Microsoft Dynamics GP looked like it was going to be difficult, but in the end was very easy to deploy.

When to use what

Submitted by Robert MacLean on Mon, 06/23/2008 - 18:56

The official MSCRM blog (http://blogs.msdn.com/crm – which I knew off the top of my head; I am aware I need to get a life) seldom excites me, because they are still so far behind what companies like the one where I currently work are doing, but Amy Langlois really did shine today. So forgive this +1 post: if you are a Microsoft Dynamics CRM developer working with the SDK you must read it, because it contains vital information on changes they are making to the SDK assemblies and when/what you should be using in your code.

Read all about it at: Web Services & DLLs or What’s up with all the duplicate classes?

The official way to change MSCRM ports

Submitted by Robert MacLean on Mon, 06/23/2008 - 18:42
Finally Microsoft have released a support article which details how to change the MSCRM web site ports correctly. This also fixes the "workflow doesn't work" et al issues (see here). This is great to have, not because it details a third way to do it (besides the SQL edit and the IFD tool), but because it actually fills in the gaps for everything else you need to do, such as reconfiguring Outlook clients and data migration tools etc…

You can read it at http://support.microsoft.com/kb/947423/