
Team Foundation Server could not resolve the user or group

One of my recent tasks was to set up a TFS 2008 server, migrate our VSS system across to it, and, once that was done, set up the projects and users. Since I have a good knowledge of the systems and had done a TFS 2005 deployment previously (although it was not adopted), I felt confident that the install wouldn’t be an issue. I did the usual prep of reading blogs and learning from others, and that did help me avoid some pitfalls.
Next up was the migration of VSS to TFS, which was actually not a major requirement as it is just there for legacy projects. All active projects would have to check their code into new TFS projects we planned to create. The key benefit of this is that it would allow us to align with EPM better than the migration tool would. I created a project and imported the 1.7 GB of source code into it! It took some time. Then I needed to add the users, and this is where I met a problem.
Regardless of whether I used the command line, the TFS admin tool or the GUI, I kept getting an error: Team Foundation Server could not resolve the user or group. <AD Distinguished Name of the User>. The user or group might be a member of a different domain, or the server might not have access to that domain. Verify the domain membership of the server and any domain trusts.



The AD issue and the TFS issue both revolved around the fact that in our AD the Authenticated Users (AuthUsers) group does not have read permissions on our users and the containers they are in. This looks odd from the outside, because when AD is set up the AuthUsers group does have those permissions, so why would our AD be different and what are the implications of changing it? The reason for the difference is that our AD is set up according to Hosted Messaging and Collaboration (you can read more about it here), which specifically removes the AuthUsers group's permissions for security reasons (i.e. to prevent users from seeing other customers). Because of this change, the GPO could not access the users' accounts, and neither could TFS read from AD what it needed.
To solve this for TFS meant giving AuthUsers read permissions on the users who needed to access TFS and their immediate container, while for AD/GPO it required AuthUsers to have permissions on the container holding the users (it doesn't need permissions on the actual users) and all its parent containers. Once those were done, the group policies and TFS started to work 100%.
That’s great, but what is the impact on the hosted environment, and is this the best way to solve the issue? Well, this does open up a security risk in that customers could see other customers simply by logging into the domain. For us this is mitigated: we are not offering hosted TFS, this is just for our own internal staff, who are aware of who our customers are, and we aren’t worried if our customers know about our staff. It is also very difficult for a customer to see other customers, as most applications don’t allow it, and those that do allow it in their standard configurations, such as MSCRM, ignore it in an HMC environment.
As to whether this is the best way to solve the issue, my view is that it is not. You should run a separate AD for each customer: a normal AD system which runs at the client premises, and, using the Customer Integration component of HMC (which is based on MIIS), sync the customer AD to the hosted AD. This means you could run GPOs and TFS at the customer site without the need to change anything on the hosted side.

Complex Reporting: Part 1 - Introduction

Recently I got the chance to build a report using Microsoft SQL Server Reporting Services 2005 for a customer. That in itself is nothing fantastic; however, this was one of the most complex reports I have ever had to build, for two main reasons:
  1. What needs to be displayed is so simple in concept, that you get misled into thinking it’s easy.
  2. To do this you need to use a component of SRS which is seldom needed, sub-reports.
But before I give away too much, let me explain what the report should show. The customer I am using in this series tracks deals for commodities over a three year period, broken down by quarter. So they want to know that in year 1, quarter 1 they sold x value worth of beer, and so on for all 12 quarters. As they are constantly adding new commodities, year 1 is not the same for each commodity. So they may start selling beer in 2008 and chickens in 2009, meaning they would only sell beer until 2011 but chickens would be sold until 2012. And all we need is a report to show this.
Disclaimer: Most of this has been changed to protect the innocent, so the report images are drawn in Visio and are not real images, the customer doesn’t sell commodities (especially not coffee and beer) and the data is completely faked.

So let’s look at the data structure. It’s a simple single table (no, I didn’t have anything to do with the design of this), which has a Deal field for the name of the commodity; Fiscal1, Fiscal2 and Fiscal3 store the years that relate to each year we are tracking; and the FxQxValue fields hold the value for that year and quarter.

Let’s look at how this would look with sample data; this is the same data we will use for the rest of the series:
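Since the table images are not reproduced here, a minimal Python sketch of the structure may help. The column names follow the Deal/Fiscal1–3/FxQxValue pattern described above; the sample values are invented, matching the disclaimer that all data in this series is faked:

```python
# A minimal sketch of the single-table structure described above.
# Column names follow the FxQxValue pattern from the post; values are invented.
rows = [
    # Each deal tracks three fiscal years, so the calendar years differ per deal.
    {"Deal": "Beer",     "Fiscal1": 2008, "Fiscal2": 2009, "Fiscal3": 2010},
    {"Deal": "Chickens", "Fiscal1": 2009, "Fiscal2": 2010, "Fiscal3": 2011},
]

# Fill the 12 quarterly value columns (F1Q1Value .. F3Q4Value) with dummy data.
for row in rows:
    for year in (1, 2, 3):
        for quarter in (1, 2, 3, 4):
            row[f"F{year}Q{quarter}Value"] = 100 * year + quarter

print(rows[0]["Fiscal1"], rows[1]["Fiscal1"])  # 2008 2009
```

Note that because the quarterly values sit in fixed columns rather than separate rows, the table is effectively pivoted already, which is part of what makes reporting on it awkward.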

Next time we will look at the report requirements itself and start to look at how to build it.

IP Address Abstraction, should you use it?

Previously I blogged about a concept called IP address abstraction, IAA for simplicity (see Zen of Hosting pt 11), where I wrote about using CNAMEs in DNS to abstract yourself away from having lots of IP addresses and needing to update lots and lots of DNS records should your IPs change. It seems like a good idea; however, no good idea seems to be a perfect fit in IT anymore :(
In this case the biggest issue is that Common DNS Operational and Configuration Errors (RFC 1912, for those who care) raises a few points, and many an admin may claim these are the cause of all kinds of things like email breaking, but as we will see that may not be the case. Let’s cover the highlights from RFC 1912 which will be pointed out to you:
A CNAME record is not allowed to coexist with any other data. However, DNS servers like BIND will see the CNAME and refuse to add any other resources for that name. Since no other records are allowed to coexist with a CNAME, the NS entries are ignored. Therefore all the hosts in the podunk.xx domain are ignored as well!
That’s a big one, since if you use IAA it will co-exist with MX, NS etc. It also goes on to say:
Don't go overboard with CNAMEs.  Use them when renaming hosts, but plan to get rid of them (and inform your users).  However CNAMEs are useful (and encouraged) for generalized names for servers -- `ftp' for your ftp server, `www' for your Web server, `gopher' for your Gopher server, `news' for your Usenet news server, etc.
[RFC 1034] in section 3.6.2 says this should not be done, and [RFC 974] explicitly states that MX records shall not point to an alias defined by a CNAME.  This results in unnecessary indirection in accessing the data, and DNS resolvers and servers need to work more to get the answer.
This basically goes against everything IAA identifies as a reason for using it :( Lastly it goes on to state
Also, having chained records such as CNAMEs pointing to CNAMEs may make administration issues easier, but is known to tickle bugs in some resolvers that fail to check loops correctly.  As a result some hosts may not be able to resolve such names.
Having NS records pointing to a CNAME is bad and may conflict badly with current BIND servers.  In fact, current BIND implementations will ignore such records, possibly leading to a lame delegation.  There is a certain amount of security checking done in BIND to prevent spoofing DNS NS records.  Also, older BIND servers reportedly will get caught in an infinite query loop trying to figure out the address for the aliased nameserver, causing a continuous stream of DNS requests to be sent.
The thing is, making administration easier is kind of the point of all this. However, there are a few things that the wily admins may not point you to. First off, this was published in Feb 1996! That’s 12 years ago, and since then the superior DNS software like BIND no longer has the issues that are stated; to that point, even the inferior DNS software like that which ships with Windows doesn’t show these issues. That basically nullifies the first and last points, but what about that bit in the middle pointing to RFC 1034 and RFC 974?
Well RFC 974 deals with MX records and routing, so it is similar to the first point but does state:
If the response contains an answer which is a CNAME RR, it indicates that REMOTE is actually an alias for some other domain name. The query should be repeated with the canonical domain name.
So basically, even if you chain CNAMEs it should not break any email system. RFC 1034 is more about DNS (it’s actually called DOMAIN NAMES - CONCEPTS AND FACILITIES) and covers the overview of how DNS should work without covering the technical details. It was written in Nov of 1987 (so even older than RFC 1912) but is not obsoleted by any other RFC. It states:
Of course, by the robustness principle, domain software should not fail when presented with CNAME chains or loops; CNAME chains should be followed and CNAME loops signalled as an error.
Basically, DNS should be robust and the idea of IAA should work regardless. The one issue I cannot disprove is that it takes additional time and bandwidth to have lots of CNAMEs. Then again, in 1996 56k was the blinding speed of the Internet; that is no longer the case. Bandwidth has increased and latency decreased so much since then that it makes sense to use that additional headroom to make a more stable Internet by making administration easier. Hopefully we can soon get some tools to test for loops, which are the biggest issue caused by this structure.
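To illustrate the robustness principle quoted above, here is a toy resolver sketch in Python (record data invented for illustration): it follows CNAME chains until it reaches an A record and signals a loop as an error, exactly as RFC 1034 requires of real resolvers.

```python
# Toy resolver: follows CNAME chains and signals loops as an error.
# The record data below is invented for illustration only.
records = {
    "www.example.com":    ("CNAME", "alias.test.com"),
    "mail.example.com":   ("CNAME", "internal1.test.com"),
    "alias.test.com":     ("CNAME", "internal1.test.com"),
    "internal1.test.com": ("A", "192.168.0.1"),
    "loop-a.test.com":    ("CNAME", "loop-b.test.com"),
    "loop-b.test.com":    ("CNAME", "loop-a.test.com"),
}

def resolve(name: str) -> str:
    """Follow CNAMEs until an A record is found; raise on loops."""
    seen = set()
    while True:
        if name in seen:
            raise ValueError(f"CNAME loop detected at {name}")
        seen.add(name)
        rtype, value = records[name]
        if rtype == "A":
            return value
        name = value  # CNAME: repeat the query with the canonical name

print(resolve("www.example.com"))  # 192.168.0.1
```

Even a two-deep chain (www → alias → internal1) resolves fine, while the deliberate loop pair raises an error instead of hanging.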
Looking at all of this, I would say that IAA is worth implementing and there is no longer a significant reason not to use it. Hopefully this document will help answer any questions or be of use when dealing with those admins who haven’t seen the light.

HMC tips for Exchange: Part 3 - Fixing GAL issues

It’s an unfortunate problem that the GAL integration isn’t rock solid with HMC and Exchange, being merely controlled by AD schema attributes (see The Zen of Hosting: Part 5 – HMC and Exchange for more info), and it’s very easy for this to be screwed up by a number of things. Most common for me is the use of the Exchange PowerShell, which seems to reset that attribute with a lot of its commands. The easiest way to resolve it is with another XML request passed to provtest, in this case the nicely named RepairExchangeObject. Basically it just needs the domain controller and the LDAP path to the user whose attributes have been screwed, and off it goes and fixes it.
NOTE: This is for HMC 4.0, 4.5 has a different structure completely. Check the SDK for the message which will give you a sample you can use.
The request looks like this:
<request> 
    <data> 
        <preferredDomainController>Domain Controller</preferredDomainController>
        <path>LDAP Path</path>
    </data>
    <procedure> 
        <execute namespace="Exchange 2007 Provider" procedure="RepairExchangeObject" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

Sample:

<request> 
    <data> 
        <preferredDomainController>srv01</preferredDomainController>
        <path>LDAP://[email protected],OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local</path>
    </data>
    <procedure> 
        <execute namespace="Exchange 2007 Provider" procedure="RepairExchangeObject" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

HMC tips for Exchange: Part 2 - Adding a distribution list

The second tip is distribution lists, which are also kind of an important thing to get set up. To do this you need to craft a CreateDistributionList XML request; this is just an XML file which looks like:

<request> 
    <data> 
        <container>LDAP Path</container>
        <preferredDomainController>Domain Controller</preferredDomainController>
        <managedBy>List Owner</managedBy>
        <name>List Name</name>
    </data>
    <procedure> 
        <execute namespace="Hosted Email 2007" procedure="CreateDistributionList" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

Sample
<request> 
    <data> 
        <container>LDAP://OU=MyCustomer,OU=MyReseller,OU=Hosting,DC=litware,DC=local</container>
        <preferredDomainController>srv01</preferredDomainController>
        <managedBy>[email protected]</managedBy>
        <name>Triage</name>
    </data>
    <procedure> 
        <execute namespace="Hosted Email 2007" procedure="CreateDistributionList" impersonate="1" > 
            <before source="data" destination="executeData" mode="merge" />
            <after source="executeData" destination="data" mode="merge" />
        </execute>
    </procedure>
</request>

You can then run that on your HMC server using the provtest command. So how do you manage who actually is in the list? Well, this is actually very easy, thanks to Outlook. First just open an email and type the list name into the To line, then right click and select Properties:

 

 

You can then use the Modify Members… button to add/remove members of this list.

 

 

Note: This can ONLY be done by the list owner which you specified in the managedBy node when you created the list.

HMC tips for Exchange: Part 1 - Adding a Room

First of a new series, though this is more of a mini series (just three parts). It is a follow up to the last series, Zen of Hosting, so it focuses on a few tips for working with HMC. All of this series is from HMC 4.0, so on 4.5 your mileage may vary. The first one is how to add a room, because meeting scheduling is kind of important. To do that, first add a user via the normal UI (i.e. the web portal); from this point it’s actually the normal procedure for adding a room. First go into Active Directory Users and Computers and disable the user, then go into the Exchange Management Console, add a room to your existing (disabled) user and voilà, done. The GAL and other Exchange/AD stuff is maintained because the user was added via the HMC way.

The Zen of Hosting: Part 12 - Server Naming

For a consistent environment you need naming standards, but the idea of a standard is a universal adherence and in IT there is no such thing. The first thing I started to look at is a naming standard for the servers themselves. Thankfully Microsoft has published a recommendation on this (available here) which is what we decided to follow since it is a simple one and is easy enough to use and remember.

Microsoft's published recommended naming convention is aa-bbb-ccccc-dd, where aa is the country code, bbb is the city designation, ccccc is the server role and dd is the number of the server. If the server is part of a cluster, array or similar, then the last two characters of the server role indicate which cluster it is part of.

Samples:

The first domain controller in Redmond, USA would be: us-rmd-ad-01

  • us = USA
  • rmd = Redmond
  • ad = Active Directory
  • 01 = First Server

 

The first BizTalk server in the second BizTalk cluster in Cape Town, South Africa would be: za-cpt-bts02-01

  • za = South Africa
  • cpt = Cape Town
  • bts02 = BizTalk Cluster 2
  • 01 = First Server

 

The first MSCRM server in Auckland, New Zealand would be: nz-ack-crm-01

  • nz = New Zealand
  • ack = Auckland
  • crm = MSCRM
  • 01 = First Server
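The convention is simple enough to capture in a small helper function, a sketch of my own for illustration (not any official tool):

```python
def server_name(country: str, city: str, role: str, number: int, cluster: int = 0) -> str:
    """Build an aa-bbb-ccccc-dd style name per the convention above.

    If the server belongs to a cluster/array, the two-digit cluster
    number is appended to the role (e.g. bts + 2 -> bts02).
    """
    if cluster:
        role = f"{role}{cluster:02d}"
    return f"{country}-{city}-{role}-{number:02d}".lower()

print(server_name("us", "rmd", "ad", 1))              # us-rmd-ad-01
print(server_name("za", "cpt", "bts", 1, cluster=2))  # za-cpt-bts02-01
print(server_name("nz", "ack", "crm", 1))             # nz-ack-crm-01
```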

However this is the only published naming standard I could find, so the naming for databases, ISA rules etc… have all been developed internally so I can’t disclose those.

This also brings to an end this series on HMC hosting, but fear not: I have a quick 3 part mini-series on the top 3 tips I have for managing an HMC environment to keep you busy.

Hyper-V and Win2k3 Network Card

If you are installing a Win2k3 machine as a Hyper-V guest, you will find that you need to install Service Pack 2 before installing the integration components (this is the stuff which increases performance, allows the mouse to work, etc.). Besides putting in a DVD and installing from that, there isn’t much option, because the network card the machine is configured with by default doesn’t work until the integration components are installed.

The solution is to remove that network card and add a Legacy Network Adapter under the Add Hardware section.

This network card works regardless of integration components, but is not as fast. So once you are up and running don’t forget to change it back.

Help files (CHM) don't work

So in Vista and/or IE8 there seems to be an issue with CHM files which you have downloaded from the Internet, where they just do not load their content.

I stumbled across this with the HMC 4.5 documentation, which is presented in, you guessed it, CHM help files. Just like when I found the MSDN Library broken, this is caused by a security feature trying to protect you. To resolve it, close the file, right click it and select Properties. In the bottom right hand corner is a nice little Unblock button; once it's clicked you can use the file normally.

The Zen of Hosting: Part 11 - DNS

The last of the hurdles to overcome for the deployment was the running of the DNS server. This is because we run on a private IP range internally and use ISA to map external IPs and ports to the services we want to publish (i.e. NAT). This basically allows us to lower the attack surface, because we only expose what is needed, and also to mix and match servers on the same IP (lowering our IP address usage).

This also means that we not only have DNS servers to allow the servers and staff internally to find the other servers and services, but we also have to have external servers to allow users on the big bad Internet to find them. There is a lot of duplication of work in this deployment scenario, as you have to create records on two servers in the best case and four in the worst, and configure them differently. This also means the room for mistakes is increased considerably. The upside is that internal staff do not need to go out of the LAN and back in via the net, or even go through the external firewalls, and that we can have different domain names internally and externally, which is great for testing and development, publishing only when needed.

What I do not understand is why the DNS server team at Microsoft can't take a leaf from MSCRM 4.0's IFD deployment and allow you to specify what the internal IP range is and set A/CNAME records for both internal and external IP ranges. When an internal IP requests a resolution it would get the internal A/CNAME records, and non-internal clients would get the external ones. This is such a logical thing to do that BIND has had this feature (views) for ages, so come on Microsoft, steal another idea from Linux ;)
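For reference, this is roughly what that split-horizon setup looks like in BIND's named.conf, using its view statement (a sketch; the network range and zone file names are invented):

```
// Internal clients get the internal version of the zone.
view "internal" {
    match-clients { 192.168.0.0/24; };
    zone "test.com" {
        type master;
        file "zones/test.com.internal";   // internal A/CNAME records
    };
};

// Everyone else gets the public version.
view "external" {
    match-clients { any; };
    zone "test.com" {
        type master;
        file "zones/test.com.external";   // public A/CNAME records
    };
};
```

Each view serves a different copy of the same zone, so one DNS server answers differently depending on where the query comes from.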

One of the design choices for the DNS structure is a concept of mine called IP address abstraction. The idea of DNS is to get us away from IPs, but the problem is that in normal DNS configurations you end up with loads of A records, and the moment you need to change IP addresses you end up spending days changing them throughout all the records. With IP address abstraction, you take a core domain name and create a single A record for each IP you have.

Examples:

  • internal1.test.com A 192.168.0.1
  • internal2.test.com A 192.168.0.2

What you do then is, everywhere else, use CNAMEs pointing to those names, regardless of the domain name.

Example:

The advantage is that if the IPs ever change, you change them in one place and it reflects everywhere, yet the experience for the end user is exactly the same as DNS has always been.
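The single-point-of-change property can be sketched in a few lines of Python (all names and addresses here are invented examples):

```python
# IP address abstraction: one A record per IP on the core domain,
# CNAMEs everywhere else. Names and addresses are invented examples.
a_records = {
    "internal1.test.com": "192.168.0.1",
    "internal2.test.com": "192.168.0.2",
}
cnames = {
    "www.customer1.com":  "internal1.test.com",
    "mail.customer1.com": "internal2.test.com",
    "www.customer2.com":  "internal1.test.com",
}

def lookup(name):
    # Follow the CNAME (if any) to the core name, then return its A record.
    return a_records[cnames.get(name, name)]

# If an IP changes, only the single A record is touched...
a_records["internal1.test.com"] = "10.0.0.1"
# ...and every CNAME that points at it follows automatically.
print(lookup("www.customer1.com"), lookup("www.customer2.com"))  # 10.0.0.1 10.0.0.1
```

Both customer records pick up the new address without either CNAME being edited, which is the whole point of the abstraction.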