Blog from January, 2018


Several years ago, Colorado School of Mines was (I would expect) like many in the Internet2 community – we were aware of efforts around something called TIER, and that it had to do with Trust and Identity. But we had a vendor solution for identity and access management that served us (relatively) well, we were members of InCommon, we relied on Shibboleth, and we didn’t quite understand where TIER fit in. At some point we even became aware that there were member institutions investing time (and money!) into this effort – but again, it was something on the radar – something that we would pay more attention to “someday”.  That day came in mid-2017, when it became clear that we faced the unenviable task of replacing our vendor IAM solution. As one option, we reached out to Internet2 and inquired about TIER, and about what it might take to become a part of the investor program. As luck would (continue to) have it, we were told the initial investor program was coming to a close.  However, the inquiry would prove serendipitous, as we were told there was a new opportunity that would soon be announced – the TIER Campus Success Program.

As we learned more about the goals of the Success Program – and its core approach, built on collaboration among institutions with a common need and goal – our local IAM architect was sold.  We are a (relatively) small central IT organization that is known (as my predecessor noted) for “fighting above our weight class”.  The thought of being in the ring with others who shared our need, had similar goals, and were working alongside us in those efforts was a reassuring one.  As it turned out, we were fortunate to be accepted into the program and joined nine other institutions in this collaborative effort.

Adoption and implementation of the TIER framework at Mines is appealing from several perspectives. First, it provides a potential solution (albeit with significant effort) to a problem we were facing with no easy, cheap, or fast options (nor likely even two of the three).  Second, the path to success is via collaboration with Internet2 and other higher ed institutions – some more like us, others not so much – all of whom share a common need and an interest in a community-built, open-source IAM solution. Finally, it affords an opportunity to be an active participant in, and contributor to, the development of what will hopefully become a broadly adopted R&E solution.

There are days I acknowledge that going this route (vs. implementing another vendor solution) is a gamble – but I’d like to think it is a calculated one. We’re not in this effort alone – that’s one great thing about the CSP approach. We’re in it with other institutions that have a stake in the success of TIER – and with access to, and the support of, an incredible group of architects and developers within I2 who are committed to the success of all of us. Internet2 has a history of creating an environment of facilitation, collaboration, and partnership that leads to the development of some indisputably key solutions for the national and international R&E community. I believe TIER has the potential to be another of those solutions.

I like to think we jumped in with our eyes wide open, but it’s impossible to see where all the landmines (or sharks?) may be hidden. We’ll certainly encounter those – but challenges, frustrations, and compromises exist in pretty much any solution. Continuing the metaphor of “jumping in”, I could say things are “going swimmingly”, and that we don’t appear to be “in over our head”, or even “Come on in! The water’s fine!” Or...  perhaps not.

We’re giving it our best – and we’ll let you know how it goes.




UMBC’s use of Shibboleth dates to the mid-2000s, when we ran Shibboleth Identity Provider version 1. Our first SAML integration went live circa 2007. We upgraded to IdP v2.0 (and SAML 2.0) in 2010, and IdP v3.0 in 2015.

UMBC has had some form of web single sign-on since 2000, when we launched a home-grown SSO service, called WebAuth, which functions similarly to CAS. Old habits die hard, and in fact, we’re still running the WebAuth service today. Several important web applications continue to rely on it, and it handles front-end authentication for the IdP (via the external authentication plugin). Our long-term goal is to move off WebAuth, and use the Shibboleth IdP exclusively for both authentication and authorization. However, that is not going to happen in the immediate future, so for now, we need to find a way for WebAuth to coexist with the TIER version of the Shibboleth IdP. Currently, they both reside on the same server, with Apache running the front-end AuthN system and proxying requests to the IdP using mod_proxy_http.
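
For those curious about the plumbing, the arrangement looks roughly like the following Apache fragment – a minimal sketch, assuming the IdP runs in a servlet container on localhost port 8080 (the port, paths, and the “WebAuth” AuthType are illustrative, not our exact configuration):

    # Proxy IdP requests through Apache to the servlet container
    ProxyPass        "/idp" "http://localhost:8080/idp"
    ProxyPassReverse "/idp" "http://localhost:8080/idp"

    # WebAuth protects the hand-off path used by the IdP's external
    # authentication plugin (the path below is illustrative and depends
    # on how the External flow is configured)
    <Location "/idp/Authn/External">
        AuthType WebAuth
        Require valid-user
    </Location>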

Why go to TIER in the first place? Well, it will be a big win for us operationally. Our current setup consists of three VMs behind a load balancer, each running identical configurations of the IdP and WebAuth. The IdP administrator (me) handles operational aspects of the identity provider, including configuration, customization, and upgrades. A separate unit within our division handles lower-level system administration of the VMs themselves, including patching, backups, and security incident response. In general, this division of responsibilities works well; however, there’s currently no mechanism in place for maintaining a consistent configuration across all three load-balanced nodes. Whenever I have to make a change (e.g., adding an attribute release rule, or loading metadata for a new relying party), I have to manually propagate it to each of the servers. It’s tedious and error-prone, and leads to inconsistencies. For example, if one of the VMs is down at the time I make the change, and later comes back up, it will have an older version of the IdP configuration until I manually intervene. While the system administration group has methods in place to facilitate replication, I’m not up to speed on the system they use, and conversely, they’re not familiar enough with the IdP to handle this on their end.

TIER, and the containerization model, promise to make things better for us. Having no real-world experience running Docker containers in production, we still have a significant learning curve ahead of us; however, I think switching from our existing system to a DevOps model will eventually pay dividends. To name a single example: replication will be a lot easier, as we’ll have a single “master” copy of the IdP configuration that we’ll use to generate as many running containers as we need, all behind a (yet-to-be-determined) load-balancing mechanism. Synchronization also ceases to be an issue, as older containers can simply be spun down and replaced with new ones.
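
To make that concrete, here’s a minimal sketch of the “master copy” idea, assuming the TIER-packaged IdP image and illustrative image names and paths:

    # Dockerfile: layer our IdP configuration onto the TIER base image
    FROM tier/shib-idp:latest
    COPY conf/     /opt/shibboleth-idp/conf/
    COPY metadata/ /opt/shibboleth-idp/metadata/

    # Build once, then stamp out as many identical nodes as needed:
    #   docker build -t umbc/shib-idp .
    #   docker run -d --name idp1 umbc/shib-idp
    #   docker run -d --name idp2 umbc/shib-idp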

In my next entry, I’ll go into more detail about how we plan to migrate from our existing IdP configuration to a TIER DevOps model.

From University of Michigan

Before containerizing Grouper, I thought I was a fairly well-seasoned identity and access management engineer.  (Because I’m a big guy, my friends might say I’m well-marbled, but that’s another story.)

So when I approached the installation of containerized Grouper, I thought I should be able to knock it out in a couple of weeks.

Boy, was I wrong!

I was completely new to containerization.  To further complicate matters, containerized Grouper had been created for use with Docker, yet the University of Michigan’s platform of choice for containerization is OpenShift.

Working with our local container gurus I had to get into the “container mindset”: nothing specific about the environment should be in the container itself.  Control everything through environment variables and secrets.  I also had to tease apart the differences between Docker and OpenShift.
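
In practice, that mindset boils down to commands like these (a hedged example – the secret, variable, and deployment names are made up for illustration):

    # Keep credentials out of the image: store them as an OpenShift secret
    oc create secret generic grouper-db --from-literal=GROUPER_DB_PASS=changeit

    # Inject the secret, plus ordinary environment variables, at deploy time
    oc set env dc/grouper-ui --from=secret/grouper-db
    oc set env dc/grouper-ui GROUPER_DB_URL=jdbc:mysql://db.example.edu:3306/grouper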

It was maddening.

It took me a month to develop a process to “bake” into a container the stuff that the Docker compose functionality does automatically.
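
The gist of the “baking”, as a minimal sketch (the image name follows the TIER packaging, but the file names and paths are assumptions, not a verified recipe):

    # Dockerfile: hard-wire what docker-compose would otherwise mount or
    # inject at runtime, since OpenShift won't run the compose file for us
    FROM tier/grouper:latest
    COPY grouper.hibernate.properties /opt/grouper/conf/
    COPY grouper-loader.properties    /opt/grouper/conf/
    COPY sources.xml                  /opt/grouper/conf/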

Once I finally built the images and deployed them to OpenShift successfully, I felt immense pride.  However, as it so often does, pride goeth before the fall.

Disappointingly, containerized Grouper still didn’t work.  I was under the misapprehension that once I deployed the images to OpenShift, Grouper would magically open up, much like the scene in the movie The Da Vinci Code when Robert Langdon (Tom Hanks) and Sophie Neveu (Audrey Tautou) enter the code to retrieve the cryptex.  Unlike them, I was left with disappointment, frustration, and sadness.

It turns out that simply running kompose convert (which I had stumbled upon, miraculously) and importing all the deployment configurations, routes, and services into OpenShift would not do the trick.  I had to get into the nitty-gritty of OpenShift’s routing and services architecture myself.
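
(For anyone following along at home, the conversion step itself is short – the compose file name here is illustrative:)

    # Translate the Docker Compose file into OpenShift artifacts
    kompose convert --provider openshift -f docker-compose.yml

    # Then import the generated deployment configs, routes, and services
    oc create -f .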

It was a cold January day when I finally configured the routes and services in some meaningful way and was able to retrieve the Grouper service provider’s metadata.  Progress!  And about an hour later, I was finally able to see the Grouper UI, albeit over an unencrypted connection.
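
For reference, that metadata check is just a matter of hitting the Shibboleth SP’s default handler URL (the hostname is illustrative, and -k skips certificate verification):

    # Fetch the SP metadata from the default Shibboleth handler location
    curl -k https://grouper.example.edu/Shibboleth.sso/Metadata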

To actually log into Grouper successfully, though, would take me another three weeks.  I eventually discovered that I had inadvertently shot my own foot, then hit it with a hammer a couple of times: when I had first started working on Grouper, I had modified the services.xml files in an inconsistent and absurd manner.

Once I edited them consistently I was finally able to log into Grouper!  Oh joy!  Oh bliss!

But, never one to rest on my laurels, I felt compelled to move forward.  Next: implement end-to-end SSL.  As it turns out, the solution to SSL was a checkbox and a pull-down menu.  Getting to the correct combination of clicks, though, took another two weeks.
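
(For the CLI-inclined, that checkbox and pull-down correspond roughly to creating a TLS-terminated route – a hedged sketch with illustrative names:)

    # Re-encrypt termination: TLS from client to router, then TLS again
    # from router to pod, which is what gives us end-to-end encryption
    oc create route reencrypt grouper --service=grouper-ui \
        --hostname=grouper.example.edu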

What’s next?  In the next two weeks I hope to have containerized Grouper pointing to our development LDAP and MySQL servers.
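
On paper, that’s mostly a configuration exercise – something along these lines in the Grouper properties files (the hosts, ports, and names below are placeholders, not our actual servers):

    # grouper.hibernate.properties: point Grouper at the development MySQL
    hibernate.connection.url = jdbc:mysql://mysql-dev.example.edu:3306/grouper
    hibernate.connection.username = grouper

    # grouper-loader.properties: point the loader at the development LDAP
    ldap.personLdap.url = ldap://ldap-dev.example.edu:389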

What have I learned from containerized Grouper so far?

  • Despite my advanced age, I can still learn, albeit a bit more slowly, it seems.

  • Do not take new technologies for granted.

  • Even though the technology may be new, there are probably still parts of it which function similarly to technologies with which I am very familiar.

  • Be patient.  Chunk what you hope to accomplish into meaningful spoonfuls so as not to get frustrated.


For the interested (or morbidly curious), I am putting together a run book of my travails.  It should be available soon.

In conclusion, may all your Grouper pods have a status of Active forevermore!

Repeal and Replace @ Mines

In 2011, Mines started on a project to replace an epic mess of shell, Perl, C, C++, Python, and a few dozen other odd tools that implemented the Mines User Database, or UDB for short.  In March of 2015, Mines migrated to a vendor-provided identity and access management solution.  The vendor solution had a number of useful features for both administrators and users, including self-service password management.  For several reasons, Mines is now faced with replacing that vendor solution.

Mines joined InCommon and began utilizing Shibboleth in 2013, and we watched with interest as the TIER project got started.  Mostly, we were interested in Grouper.  During the spring/summer of 2017, a number of factors motivated the need to identify a new IAM solution, so we were excited to hear about the new I2 Campus Success Program.

Over the past several months, we have been reading up on midPoint and developing a project plan to deploy both midPoint and Grouper.  There are quite a few differences between midPoint and the entity registry of the vendor solution. Over the next several months we will be describing those differences and how we intend to get around them.

Prior to our participation in the TIER Campus Success Program, the University of Illinois Identity and Access Management team had been working to deploy Grouper as a campus-wide authorization solution, branded locally as "Authorization Manager", or "AuthMan" for short. AuthMan was marketed on our campus as a solution for centrally managing authorization policies and quickly drew the interest of departments and service/application owners alike. In addition to work on Grouper, our Shibboleth IdP was slated to move to the cloud as part of a cloud-first initiative to migrate many of our central IT services to Amazon Web Services. Our team decided to shift gears with Grouper and deploy it in Amazon Web Services as well to align with this cloud-first initiative. This created a bit of a reset in the project, as we had a new deployment model and process.

In addition to deploying these two services in the cloud, we are also working on migrating our primary LDAP infrastructure to Active Directory. The University of Illinois has many applications behind Shib, but our Shib IdPs have been using a secondary LDAP as their data source due to Active Directory constraints. Because of the cloud-first initiative and continuing work to consolidate redundant services, we're also working on making AD the authoritative source of data for both Shib and Grouper. This requires some schema and policy updates to our AD to accommodate FERPA-suppressed users, as well as making sure all necessary attributes are correctly mapped in AD.
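
On the Shib side, that amounts to repointing the attribute resolver’s LDAP data connector at AD – a sketch under stated assumptions (every hostname, DN, and attribute name below is illustrative, including the shadow attribute):

    <DataConnector id="adDirectory" xsi:type="LDAPDirectory"
        ldapURL="ldaps://ad.example.edu:636"
        baseDN="OU=People,DC=example,DC=edu"
        principal="CN=shib-svc,OU=Service Accounts,DC=example,DC=edu"
        principalCredential="%{idp.attribute.resolver.LDAP.bindDNCredential}">
        <!-- sAMAccountName as the join key; a shadow attribute such as
             uiucEduFerpaSuppressed would carry the suppression flag -->
        <FilterTemplate>
            <![CDATA[ (sAMAccountName=$resolutionContext.principal) ]]>
        </FilterTemplate>
    </DataConnector>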

Over the past month, we've been working on migrating our Shib IdPs to a dockerized TIER package hosted in Amazon Web Services, and have developed a plan to implement shadow attributes in AD to handle FERPA-suppressed users. We needed to resolve some logging dependencies with Shib, and we've used Splunk as the solution.  In addition, the AD schema changes are slated for early spring. We're also preparing a dockerized Grouper TIER image for deployment in Amazon Web Services. We are looking forward to working with peer institutions to help us "get over the hump" of learning Grouper and sharing our own experiences with deploying on AWS.
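
As one way such log forwarding can be wired up (a hedged sketch, not necessarily our exact deployment – the token and URL are placeholders), Docker’s built-in Splunk logging driver can ship container output to a Splunk HTTP Event Collector:

    # Run the TIER IdP image with container logs shipped to Splunk HEC
    docker run -d \
      --log-driver=splunk \
      --log-opt splunk-url=https://splunk.example.edu:8088 \
      --log-opt splunk-token=00000000-0000-0000-0000-000000000000 \
      tier/shib-idp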