
Troubleshooting Certificates and the Chain Build Process

Published on Monday, November 18, 2013

Recently I got a request from a customer to update the root certificates for several certificates they had in place. The problem was that one of the intermediate CAs had an expiration date before the expiration date of the actual certificate. Here’s the information we got with this notification.

[image: schema-brca2-server]

The problem was the Belgium Root CA2: it’s valid until 27/01/2014, whilst several of the “your servercertificate (SSL)” certificates are valid until the end of 2014. When clients would validate this chain after the 27th of January, this would cause problems. With this news we received the new root and intermediate CAs in a zip file.

Using certutil you can easily install them in the required stores on the server that has “your servercertificate (SSL)” configured for one or more services.

  • certutil -addstore -enterprise Root "C:\Temp\NewRootChain\Baltimore Cybertrust Root.crt"
  • certutil -addstore -enterprise CA "C:\Temp\NewRootChain\Cybertrust Global Root.crt"
  • certutil -addstore -enterprise CA "C:\Temp\NewRootChain\Belgium ROOT CA 2.crt"
  • certutil -addstore -enterprise CA "C:\Temp\NewRootChain\government2011.cer"
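To double-check that everything landed where it should, certutil can also list the stores back and validate a certificate’s full chain. A quick sketch (the .cer file name is a placeholder of mine):

```shell
:: List the intermediate CA store of the local computer
certutil -enterprise -store CA

:: Build and validate the full chain for a given certificate, fetching AIA/CRL URLs
certutil -verify -urlfetch C:\Temp\web.contoso.com.cer
```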

After performing these steps I could see the new chain reflected in my certificate on the server. I figured the clients should retrieve this chain as well, one way or another. Upon accessing https://web.contoso.com I could see that the certificate was trusted, but the path was still showing the old chain!

The first thing I verified was that the “Baltimore Cybertrust Root” was in the trusted root certification authorities of my client. It was present without me actually putting it there. This makes sense, as it probably comes with Windows Update or something alike. I assumed the client has to retrieve the intermediate certificates itself, and I thought it would go externally for that. From the certificate I found the Authority Information Access URL, which pointed to the (outdated) Belgium Root CA2 on an external (publicly available) URL. “AHAH”, I figured, time to contact the issuers of these certificates. They kindly replied that if the server has the correct chain, the clients should reflect this. They also provided me an openssl command and requested more information.

This made me dig deeper. After a while I came to the following conclusion: my “bad” client showed different paths for these kinds of certificates… When visiting my ADFS service I saw the correct chain being built, but on my web server I had the old chain. Very odd. So something had to be wrong server side. From what I can tell, here’s my conclusion:

The browser gets the intermediate certificates in the chain from the IIS server:

  • IIS 8.0 on Windows 2012: update the stores and all is good (or the servers had a reboot somewhere in the last weeks that I’m unaware of)
  • IIS 7.5 on Windows 2008 R2: update the stores AND unbind/bind the certificate in the IIS bindings of your website(s).

For IIS 7.5 I also tried an IIS reset, but that wasn’t enough. Perhaps a reboot would work too. Here’s my source for the solution: http://serverfault.com/questions/238206/iis7-not-sending-intermediate-ssl-certificate

A useful openssl command, which even works for services like LDAPS, will show you all certificates in the chain:

  • openssl.exe s_client -showcerts -connect contoso.com:636
  • openssl.exe s_client -showcerts -connect web.contoso.com:443
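To see just the subject, issuer and expiry date of the server certificate, the s_client output can be piped into openssl x509; note this only parses the first certificate in the chain:

```shell
openssl.exe s_client -showcerts -connect web.contoso.com:443 < NUL | openssl.exe x509 -noout -subject -issuer -enddate
```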

P.S. The new chain also has an oddity… Belgium Root CA2 is valid until 2025, whilst the Cybertrust Global Root expires in 2020.

Bonus tip #1: in the Windows event log (of the client) you can enable CAPI2 logging. This will show you detailed information on all certificate-related operations. In my opinion the logging is often too detailed to tell you much, but it’s nice to know it’s there. You can find it under Applications and Services Logs\Microsoft\Windows\CAPI2; right-click Operational and choose Enable Log.
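The same log can also be enabled and queried from an elevated command prompt with wevtutil:

```shell
:: Enable the CAPI2 operational channel
wevtutil sl Microsoft-Windows-CAPI2/Operational /e:true

:: Show the 10 most recent entries as text, newest first
wevtutil qe Microsoft-Windows-CAPI2/Operational /c:10 /rd:true /f:text
```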

Bonus tip #2: on Windows 2012 / Windows 8 you can easily open the certificate stores of both the current user and the local computer. In the past I often used mmc > add/remove snap-in > certificates > click some more > … Now there’s a way to open a certificates MMC straight from the command line:

  • Current user: certmgr.msc
  • Local computer: certlm.msc
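The same stores can be browsed from PowerShell as well, for example to quickly check the expiry dates in the intermediate store:

```powershell
# Intermediate Certification Authorities store of the local computer
Get-ChildItem Cert:\LocalMachine\CA | Format-Table Subject, NotAfter -AutoSize
```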

[Update 20/11/2013] I forgot to mention the -enterprise switch in the certutil commands. It ensures the local computer certificate stores are used.


Windows 2012 File Server: SRMSVC Events In Event Log

Published on Monday, October 14, 2013

We’re currently defining a new build for our file servers. On one of the servers we installed in the test environment, we started seeing a lot of errors in the Application event log.

The events we were seeing:

Event 12344:

File Server Resource Manager finished syncing claims from Active Directory and encountered errors during the sync (0x80072030, There is no such object on the server). Please check previous event logs for details.


Event 12339:

File Server Resource Manager failed to find the claim list 'Global Resource Property List' in Active Directory (ADsPath: LDAP://domaincontroller.contoso.com/CN=Global Resource Property List,CN=Resource Property Lists,CN=Claims Configuration,CN=Services,CN=Configuration,DC=contoso,DC=com). Please check that the claims list configured for this machine in Group Policy exists in Active Directory.

As you can see, these were piling up real fast:

[screenshot: Application event log]

From what I can tell, these started happening after we configured file quotas. In order to do this we added the File Server Resource Manager feature. A quick google led me to the following solution: in order to avoid these errors, a schema upgrade to Windows 2012 is required. Our domain is currently on 2008 R2. I haven’t performed the upgrade just yet, but I wanted to share this nonetheless.
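If you want to check your current schema level before planning the upgrade, something like this should work (requires the ActiveDirectory module; the version numbers are from memory, so verify them):

```powershell
Import-Module ActiveDirectory

# objectVersion 47 = Windows 2008 R2 schema, 56 = Windows 2012 schema
Get-ADObject (Get-ADRootDSE).schemaNamingContext -Properties objectVersion |
    Select-Object objectVersion
```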

My sources for this information:


UAG 2010: The URL you have requested is too long.


For a customer of mine we’ve set up a UAG which is configured as a Relying Party of an AD FS 2.0 server. This means the trunk itself is configured to use ADFS as its authentication server. Upon accessing any application on this trunk we are redirected to the AD FS server, as expected, but UAG greets us with an error page containing "The URL you have requested is too long." For this setup we are publishing the AD FS server over that exact same trunk. So to be more precise, UAG is acting as an AD FS proxy as well.

UAG version in place: UAG 2010 SP3 U1

Here's some more background information regarding this specific issue: TechNet: UAG ADFS 2.0 Trunk Authentication fails: The URL you have requested is too long.

The error:

[screenshot: UAG error page]

In words:

The URL you have requested is too long.

Navigate back and follow another link, or type in a different URL.

In the end we opened up a case with Microsoft and they came back with this registry key:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\WhaleCom\e-Gap\von\UrlFilter]
"MaxAllHeadersLen"=dword:00020710

In order to properly apply this setting:

  • Set the key
  • Activate the UAG configuration
  • Perform an IIS Reset

The value above is for testing only. For a real production environment I would start with 8192 bytes (watch out: the key is in hex, so that’s dword:00002000) and slowly move up until I feel I have a comfortable margin.


Quick Tip: AD FS Server Name as a Claim

Published on Tuesday, September 17, 2013

I’m not sure anyone else besides me finds this piece of information important, but sometimes I like to know which AD FS server issued the actual claims, at least when multiple servers are joined to the ADFS farm. For instance when trying to find out whether the load balancing is acting like it should, or just to make sure you are watching the event log or debug logs on the correct server. Here’s a simple way to do it. There might be more elegant ways as well; if you have some, I hope you drop a comment!

First I started by creating an additional attribute store:

[screenshot: attribute stores]

The store is of the type SQL:

[screenshot: attribute store type]

And here’s the connection string:

Server=\\.\pipe\MICROSOFT##WID\tsql\query;Database=AdfsConfiguration;Integrated Security = True

In my case I’m using the Windows Internal Database instance used by the ADFS service. Whether to use WID or SQL for ADFS is a discussion I will not touch here. By using the WID we can safely assume it’s available and accessible on all ADFS servers. If you were to use a SQL Server instance, that should be reachable from each ADFS server as well; just update the connection string to point to your remote SQL Server instance in that case.

Now we’ll add the claim rules of our application to issue the ADFS server name:

[screenshot: claim rules]

As you can see, by using the SQL query “Select HOST_NAME() As HostName” we can determine the hostname of the ADFS server issuing the claim. I’m not even sure “As HostName” has to be in there; I just copy-pasted this from some SQL blog ; ). The query returns the hostname of the client talking to SQL, in this case the ADFS server. And here’s the result:

[screenshot: resulting claim]

I am not saying it’s a good idea to have this rule active all the time, as querying additional stores probably comes with a performance penalty, but it might be very convenient for test environments or temporary situations.
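For reference, the issuance rule from the screenshots can be written out roughly like this in the claim rule language; the store name and claim type below are placeholders of mine:

```
=> issue(store = "ADFS Configuration Store", types = ("http://temp.org/servername"), query = "Select HOST_NAME() As HostName");
```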


ADFS: Certificate Private Key Permissions

Published on Friday, July 26, 2013

Just as a reminder for myself: the following error might appear in the ADFS Admin log after a user is faced with the ADFS error page. The error is pretty cryptic and gives no real clues away.

Error event ID 364: Encountered error during federation passive request.

Additional Data

Exception details:
Microsoft.IdentityServer.Web.RequestFailedException: MSIS7012: An error occurred while processing the request. Contact your administrator for details. ---> Microsoft.IdentityServer.Protocols.WSTrust.StsConnectionException: MSIS7004: An exception occurred while connecting to the federation service. The service endpoint URL 'net.tcp://localhost:1501/adfs/services/trusttcp/windows' may be incorrect or the service is not running. ---> System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at net.tcp://localhost:1501/adfs/services/trusttcp/windows that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

[screenshot: event 364 details]

But after restarting the ADFS service, additional errors are shown:

Error event ID 102: There was an error in enabling endpoints of Federation Service. Fix configuration errors using PowerShell cmdlets and restart the Federation Service.

Additional Data
Exception details:
System.ArgumentNullException: Value cannot be null.
Parameter name: certificate
   at System.IdentityModel.Tokens.X509SecurityToken..ctor(X509Certificate2 certificate, String id, Boolean clone, Boolean disposable)
   at System.IdentityModel.Tokens.X509SecurityToken..ctor(X509Certificate2 certificate)
   at Microsoft.IdentityServer.Service.Configuration.MSISSecurityTokenServiceConfiguration.Create(Boolean forSaml)
   at Microsoft.IdentityServer.Service.Policy.PolicyServer.Service.ProxyPolicyServiceHost.ConfigureWIF()
   at Microsoft.IdentityServer.Service.SecurityTokenService.MSISConfigurableServiceHost.Configure()
   at Microsoft.IdentityServer.Service.SecurityTokenService.STSService.StartProxyPolicyStoreService(ServiceHostManager serviceHostManager)
   at Microsoft.IdentityServer.Service.SecurityTokenService.STSService.OnStartInternal(Boolean requestAdditionalTime)

And Event id 133: During processing of the Federation Service configuration, the element 'signingToken' was found to have invalid data. The private key for the certificate that was configured could not be accessed. The following are the values of the certificate:
Element: signingToken

This one is more descriptive. Here and there you see people saying that adding the ADFS service account to the local admins resolves this issue. I can imagine it does, but that account is not supposed to have that kind of privileges! It’s sufficient to grant read (not even full control) on the private keys of the token signing and decrypting certificates. You can manage these by opening the MMC, adding the certificates snap-in for the computer account and browsing the personal store.

[screenshot: private key permissions]
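If you prefer the command line over the snap-in’s “Manage Private Keys” dialog, the key container for a CSP key lives as a file under MachineKeys. A sketch (the container file name and service account below are placeholders):

```shell
:: Shows the key container name for each certificate in the personal store
certutil -store MY

:: Grant the ADFS service account read access on the container file
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\<containerFileName>" /grant CONTOSO\svc-adfs:R
```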


Quick Tip: Use PowerShell To Browse Through An Event Log


When trying to troubleshoot AD FS claim rules, I often find myself going back and forth in the Security event log. But the interface doesn’t really allow you to easily see whether a message is relevant or not. Here’s a small PowerShell command, which probably can be optimized in many ways, that will print the last 60 events (starting from the most recent) that match the AD FS 2.0 Auditing source. Just press enter to go to the next event. Events are separated by a green dotted line.

get-eventlog Security -newest 60 | where-object {$_.Source -eq "AD FS 2.0 Auditing"}| % {write-host -foregroundcolor green "----------------------------------------------------";read-host " "; $_.message| fl}

[screenshot: command output]
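On more recent systems you can get the same list with Get-WinEvent, which filters on the server side and tends to be noticeably faster than get-eventlog piped into where-object:

```powershell
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; ProviderName = 'AD FS 2.0 Auditing' } -MaxEvents 60 |
    ForEach-Object { $_.Message }
```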

Or even a bit more elaborate: a small script which allows you to go down, but also back up if you missed something:

$default = "d"
$events = get-eventlog Security -newest 60 | where-object {$_.Source -eq "AD FS 2.0 Auditing"}
$i = 0
while($i -lt $events.count -and $i -gt -1){
    write-host -foregroundcolor green "------------------$i-----------------------"
    $events[$i].message
    write-host ""
    write-host ""
    $direction = read-host "Continue? u(p) or d(own) [$default]"
    if($direction -eq $null -or $direction -eq ""){$direction = $default}
    if($direction -like "u"){
        $default = "u"
        $i--
    }
    else{
        $default = "d"
        $i++
    }
    $direction = $null
}

You can just copy-paste this in a prompt; it’s not even necessary to create a ps1 file for this. Although I can only encourage you to modify this sample so you can find your needle in the haystack more easily!


SCCM: Task Sequence / Software Updates Paused


Lately we had a ticket where a user was unable to execute task sequences from the Run Advertised Programs console on his client. FYI, we’re running SCCM 2007 R2. The error the user was facing was this one:

This program cannot run because a reboot is in progress or software distribution is paused.

In the smsts.log file on the client (C:\Windows\System32\CCM\Logs\SMSTSLog\smsts.log) we saw the message “Waiting for Software Updates to pause”. So it seems that besides the task sequence we wanted to execute, the client was also performing software updates in the background.

[screenshot: smsts.log]

In the UpdatesDeployment.log we found something like “Request received – IsPaused” and “Request received – Pause”:

[screenshot: UpdatesDeployment.log]

Somehow we couldn’t do much with this information. We hit a wall, as we had no clue which updates were installing or why they were hanging. So we continued our search. After some digging we found the following information in the registry:

[screenshot: Task Sequence registry key]

So SCCM keeps track of the task sequence currently executing below HKLM\Software\Microsoft\SMS\Task Sequence. It will only allow one at a time. When comparing the registry entries with a working client, we saw a small difference: the problem client didn’t have a “SoftwareUpdates” registry entry. As far as I can tell, this is SCCM’s way of letting a task sequence know whether it can execute or not. In order to execute it needs two “cookies”: one for Software Distribution and one for Software Updates. If it has both, it has what it needs to get started.

The actual value of the cookies can also be found in the following location: HKLM\Software\Microsoft\SMS\Mobile Client\Software Distribution\State

[screenshot: State registry key]

There we could see that execution was indeed paused, as this entry had a value of 1. This was consistent with the error we were seeing in the Run Advertised Programs GUI. A lot of articles and blogs tell you to set it to 0 or delete it. We tried that, but it didn’t have any effect. And then I found the following forum post: http://www.myitforum.com/forums/Software-Updates-waiting-for-installation-to-complete-m221843.aspx With the information posted by gurltech I was able to perform the following steps:

Open wbemtest and connect to root\ccm\softwareupdates\deploymentagent

[screenshot: wbemtest connect]

Execute the following query: select * from ccm_deploymenttaskex1

[screenshot: wbemtest query]

If all goes well you find an instance

[screenshot: query result]

And now check the AssignmentID property

[screenshot: AssignmentID property]

This ID can be used to track down the deployment that’s supposedly “in progress”. When opening the “Status for a deployment and computer” report and providing the ID we just found and the computer name, we couldn’t find any updates to be installed or failed.

[screenshot: deployment status report]

So I figured using the script to clear the deployment task from WMI couldn’t hurt much: either it would reappear on the next software update scan cycle, or it would be gone forever. And indeed, after setting the IDs (AssignmentId and JobId) to 0 and recycling the SCCM client service, we were able to execute task sequences again on that client. This situation might be very rare to run into, but I think it gives you some insight into how SCCM works.
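The wbemtest steps above can also be scripted in PowerShell. This is a sketch based on the namespace, class and property names from this post:

```powershell
# Query the deployment task instance SCCM still considers in progress
$task = Get-WmiObject -Namespace "root\ccm\softwareupdates\deploymentagent" `
                      -Query "select * from ccm_deploymenttaskex1"
$task.AssignmentID   # the ID to feed into the deployment status report

# Clear the stale task and recycle the SCCM client service
$task.AssignmentID = 0
$task.JobID = 0
$task.Put()
Restart-Service CcmExec
```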


AX 2012: Validate Settings Fails for Report Server Configuration

Published on Tuesday, July 9, 2013

Setting up AX 2012 Reporting involves installing SQL Server Reporting Services and registering that Reporting Services installation in AX. One of the issues we were having is that we were seeing some problems deploying reports. In order to troubleshoot, we tried the Test-AxReportServerConfiguration cmdlet.

[screenshot: Test-AxReportServerConfiguration output]

This cmdlet was telling us “the report server URL accessible: False”. Hmm, that’s odd. We were pretty sure that all involved URLs (Report Manager & Report Server web service) were properly resolving and responding. When double-checking the AX Report Server configuration within the AX client, we tried the Validate settings button:

[screenshot: validate settings]

However, we stumbled upon the following error:

[screenshot: validation error]

In words:  Exception has been thrown by the target of an invocation. The SQL Server Reporting Services server name RPRTAX1B.contoso.com does not exist or the Web service URL is not valid.

As it kept complaining about the URL, I started to suspect the root cause. From earlier experiences (Dynamics Ax 2012: Error Installing Enterprise Portal) I know that not all AX components can properly handle host headers. This is how our SQL Reporting Services host header configuration looked for the Report Server URL:

[screenshot: host header configuration]

Yup, we’ve got multiple entries. The reason is somewhat historical and not relevant here. It seems that AX, when validating the settings, checks whether the Report Server URL matches the first host header in the SQL Reporting Services configuration. So I went ahead, removed all entries but the good one, OK-ed and applied. After that I re-added them. This ensured the URL AX knows of was on top of the list. And yup, everything started working!

A colleague from the AX team showed me which code was performing this check. Here’s the offending code:

public boolean queryWMIForSharePointIntegratedMode(str serverName, str _serverUrl)
{
    boolean result = false;
    try
    {
        result = Microsoft.Dynamics.AX.Framework.Reporting.Shared.Proxy::QueryWMIForSharePointIntegratedMode(serverName, _serverUrl);
    }
    catch (Exception::CLRError)
    {
        // We must trap CLRError explicitly, to be able to retrieve the CLR exception later (using CLRInterop::getLastException())
        SRSProxy::handleClrException(Exception::Error);
        result = false;
    }
    return result;
}

And that’s how I come to part two: when creating Report Server configurations within AX, one might wonder how to register a load balanced Reporting Services setup…

Here’s the configuration extract of the server name & URLs for such a configuration. Now how do we handle the fact that there are 2 servers and one (virtual) load balanced URL?

[screenshot: report server configuration]

In a load balanced setup with 2 reporting servers you’ll typically have 3 configurations FOR EACH AOS instance:

  1. RSServerA (Default Configuration: unchecked)
    1. Server name: ServerA
    2. Report Manager URL: axreports.contoso.com/reports
    3. Web service URL: axreports.contoso.com/reportserver
  2. RSServerB (Default Configuration: unchecked)
    1. Server name: ServerB
    2. Report Manager URL: axreports.contoso.com/reports
    3. Web service URL: axreports.contoso.com/reportserver
  3. RSVirtualServer (the load balancer) (Default Configuration: checked)
    1. Server name: ServerA
    2. Report Manager URL: axreports.contoso.com/reports
    3. Web service URL: axreports.contoso.com/reportserver

Now the clue is in the server name: this is the name used to contact the actual Windows server for certain information. As in the code above, the server will be contacted over WMI to read the requested setting. If you were to enter “axreports.contoso.com” as a server name, you’d be seeing all kinds of errors: for starters, typically your load balancer only balances port 80 or 443, but WMI uses other ports, so these connections will fail. As far as I learned from my AX colleague, the AOS instance can use the load balancer configuration entry, and you can use the node configurations for your report deployments. In that way the server name probably doesn’t matter that much on the load balancer configuration item.
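A quick way to check whether a given server name will work for this WMI lookup is to try the same kind of query yourself from the AOS machine (ServerA taken from the example above):

```powershell
# Succeeds when WMI (DCOM/RPC) is reachable; a load balanced name that only
# forwards port 80/443 will fail here
Get-WmiObject -Class Win32_OperatingSystem -ComputerName ServerA
```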

I hope I don’t sound too cryptic; if you’d like any further explanation, feel free to comment.


Windows 2012 R2 Preview: Web Application Proxy Installation Screenshots

Published on Thursday, June 27, 2013

For those interested in the look and feel of the new Web Application Proxy role, here are some screenshots of a fairly simple next-next-finish setup.

The installation:

[installation screenshots]

Remark: it seems I’ll have to add a server to my lab environment.

[installation screenshots, continued]

The Configuration:

[configuration screenshots]

The Management Console:

Open the Remote Access Management Console

[screenshot: Remote Access Management Console]

The Publish New Application Wizard:

Remark: read the explanation next to the ADFS selection bullet; it’s fairly descriptive.

[wizard screenshots]

Seems like basic internal <> external stuff.

[screenshot: publishing settings]

Remarks:

  • Active Directory Federation Services and Web Application Proxy can’t be combined on one server.
  • Active Directory Federation Services has to be installed in your domain before you can install the Web Application Proxy, as you need to specify it.
  • Selecting Pass-through on the Preauthentication screen will skip the Relying Party selection; your application will then handle the authentication itself. This will break your users’ SSO experience though.


Windows 2012 R2 Preview: Active Directory Federation Services Installation Screenshots


Just for those interested, here are the screenshots of the ADFS installation on a Windows 2012 R2 Preview installation. Before 2012 R2 it wasn’t advised to install ADFS on a domain controller, as the ADFS solution relied on IIS. But with the 2012 R2 version the IIS dependency is gone, and installing ADFS on domain controllers is now supported by Microsoft. I think this will lower the bar for a lot of companies. The enhanced authentication options (multi-factor) also seem really promising.

The installation:

[installation screenshots]

Remark: in the end my system didn’t need to reboot

[installation screenshots, continued]

The configuration:

[configuration screenshots]

Remark: small sidestep here: obviously I want to use Group Managed Service Accounts!

[configuration screenshots, continued]

Remark: lab only procedure: ensures Group Managed Service Accounts are available immediately

[remaining configuration screenshots]

The management console, with focus on the new Authentication Policies section:

[screenshot: Authentication Policies]

A new Relying Party Trust type:

If I read the explanation correctly, this will allow you to publish non-claims-aware applications over the new Web Application Proxy role.

[screenshot: Relying Party Trust wizard]

Remarks:

  • The option for a stand-alone ADFS server is no more: either you install a single-node farm or a real farm. Makes sense to me.
  • You still have the option to choose between a Windows Internal Database or a dedicated SQL Server database. This might be a hard choice. I’m not sure I’m happy to have Internal Databases running on my domain controllers; SQL on the other hand requires a cluster for proper availability, which might be quite expensive to sell to your customers.
  • A named certificate forces you to take the subject of the certificate as the Federation Service name. A wildcard certificate allows you to pick freely as long as the wildcard is respected. It seems you can have additional dots in the wildcard part though; I advise against this, as you’ll probably face certificate validation errors in your browser. Example: *.realdolmen.com allows you to select sts.sub.realdolmen.com.
  • If you want to compare with the Windows 2012 ADFS installation: vankeyenberg.be: ADFS Part 1: Install and configure ADFS on Windows 2012
  • The Authentication Policies section in the management console seems awesome: very clear and it seems very easy to manage.