13 May 2016

Federated Authorization Using 3rd Party JWTs

Continuing on the theme of authorization from recent blogs, I've seen several emerging requirements for what you could describe as federated authorization using an offline assertion.  The offline component pertains to the fact that the policy decision point (PDP) has no prior or post knowledge of the calling user: all of the subject information and context are self-contained in the PDP evaluation request - e.g. a request carrying a JSON Web Token (JWT).

A common illustration could be where you have distinct domains or operational boundaries that exist between the assertion issuer and the protected resources. An example could be being able to post a tweet on Twitter using only your Facebook account, with no Twitter profile at all.

A neat feature of OpenAM is the ability to perform policy decision actions without prior knowledge of the subject - in fact, without the subject having a profile in the AM user store at all.  Doing this requires a few steps.

Firstly, let me create a resource type - for interest, I'll make a non-URL resource representing access to a meeting room.

For my actions, I'll add in some activities you could perform within a meeting room...

Next step is to add in a policy set for my Meeting Room #1 and a policy to allow my External Users access to it.

The subjects tab for my policy is the first slight difference from a normal OpenAM policy.  My users who are accessing the meeting are external, so they will not have a session or entry in the OpenAM profile store. Instead of looking for authenticated users, I switch to checking for presented claims.  I add in 3 claims: one to check the issuer (obviously only trusted issuers are important to me... but at this step we're not verifying the issuer, that comes later), one for the audience, and one for a claim called Role.  Note the claims checks here are simple string comparisons, not wildcards, and no signature checks have been done yet.
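Those subject checks amount to exact string matches on the presented claims - a minimal sketch, with illustrative claim names and values:

```javascript
// Exact-match claim checking, as per the subject condition above --
// no wildcards and no signature verification at this stage.
function claimsMatch(presented, required) {
  return Object.keys(required).every(name => presented[name] === required[name]);
}
```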

I next add in some actions that my external users can perform against my meeting room.  As they are managers, I add in the ability to order food, but they can't use the whiteboard!

So far pretty simple.  However, there is one big thing we haven't done: verify the presented JWT.  The JWT should be signed by the 3rd party IDP in order to provide authenticity of the initial authentication.  For further info on JWT structure see RFC 7519 - but basically there are 3 components: a header, a payload and a signature.  The header contains algorithm and data structure information, the payload the user claims, and the signature a cryptographic element.  Together these create a base64url-encoded, dot-delimited token.  However... we need to verify the JWT is from the issuer we trust.  To do this I create a scripted policy condition that verifies the signature.

This simply calls either a Groovy or JavaScript script that I create in the OpenAM UI or upload over REST.

The script basically does a check to make sure a JWT is present in the REST PDP call, strips out the various components and creates a corresponding signature based on a shared secret.  If the reconstructed signature matches the submitted JWT signature we're in business.

The script calls in the ForgeRock JSON, JSE, JWS and JWT libraries that are already being used throughout the product, so we're not having to recreate anything new here.

To test the entire flow, you need to create a JWT with the appropriate claims from a 3rd party IDP. There are lots of online generators that can do this.  I used this one to build my JWT.

Note the selection of the algorithm and key.  The key is needed in the script on the AM side.

I can now take my newly minted JWT and make the appropriate REST call into OpenAM.

The call sends a request into ../json/policies?_action=evaluate with my payload of the resource I'm trying to access and my JWT (note this is currently submitted both within the subject.jwt attribute and also the environment map due to OPENAM-8893).  In order to make the call - remember my subject doesn't have a session within OpenAM - I create a service account called policyEvaluator that I use to call the REST endpoint with the appropriate privileges.
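The shape of that evaluate payload can be sketched as follows; the duplication of the JWT into both the subject and environment maps follows the OPENAM-8893 note above, while the application name and resource value are placeholders:

```javascript
// Sketch of the body for POST ../json/policies?_action=evaluate.
// The JWT goes into both subject.jwt and the environment map.
function buildEvaluateRequest(resource, jwt, applicationName) {
  return {
    resources: [resource],          // the resource being accessed
    application: applicationName,   // the policy set name
    subject: { jwt: jwt },
    environment: { jwt: [jwt] }     // environment values are string lists
  };
}
```

The request itself would be made with the session of the policyEvaluator service account, since the subject has no OpenAM session of their own.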

A successful call results in access to the meeting room, once my JWT has been verified correctly:

If the signature verification fails I am given an advice message:

Code for the policy script is available here.

NB - the appropriate classes, and also the primitive byte[], need to be added to the Java white list for the policy engine, within the global configuration.

3 March 2016

In flight Authorization Management

Access request, or authorization management, is far from new.  The classic use case is a workflow process that, via approval, updates a profile or account with a persisted attribute/group/permission in a target system.  At run time, when a user attempts to perform an action on the target system, the system locally checks the profile of the user and looks for the particular attributes that have been persisted.

A slight variation on this theme is to provide a mechanism to alter (or at least request to alter) the persisted permissions at near run time.  An example of this is to leverage OAuth2 and a tokeninfo endpoint that can convert an access_token into scope values, which are used by the resource server to handle local authorization.  Dependent on the content of the scope values, the resource server could provide a route for those persisted entries to be updated - aka an access request.

In the above example, we have a standard OAuth2 client-server relationship on the right hand side - it just so happens we're also using the device flow pin and pair paradigm that is described here. Ultimately the TV application retrieves user data using OAuth2 - one of the attributes we send back to the TV, is an attribute called waterShedContent - this is a boolean value that governs whether the user can access post 9pm TV shows or not.  If the value is false, the TV player does not allow access - but does then provide a link into OpenIDM - which can trigger a workflow to request access.

The above flow goes something like this:

  1. User performs OAuth2 consent to allow the TV player access to certain profile attributes (0 is just the onboarding process for the TV via pin/pair for example)
  2. OpenAM retrieves static profile data such as the waterShedContent attribute and makes it available via the ../tokeninfo endpoint, accessible using the OAuth2 access_token
  3. Client interprets the data received from the ../tokeninfo endpoint to perform local authorization (checking whether waterShedContent is true or false, for example), providing a link into OpenIDM that can trigger an access request
  4. The BPMN workflow in IDM searches for an approver and assigns them a basic boolean gateway workflow - either allow or deny.  An allow triggers an openidm.patch that updates the necessary attribute that is then stored in OpenDJ
  5. The updated attribute is then made available via the ../tokeninfo endpoint again - perhaps via a refresh_token flow and the updated attribute is available in the client
Triggering a remote workflow (step 3) is pretty trivial - simply call /openidm/workflow/processinstance?_action=create with the necessary workflow you want to trigger.  To work out who to assign the workflow to, I leveraged the new relationship management feature of IDM and used the execution.setVariable('approver', approver) function within the workflow.  The approver was simply an attribute within my initial user object that I set when I created my managed/object.
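As a sketch, the body of that processinstance call is just the process definition key plus any variables the workflow expects - the key and variable names below are illustrative, not taken from the deployment described above:

```javascript
// Illustrative body for POST /openidm/workflow/processinstance?_action=create
function buildWorkflowRequest(processKey, variables) {
  // OpenIDM identifies the process definition to start via _key;
  // the remaining properties become workflow variables
  return Object.assign({ _key: processKey }, variables);
}
```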

The code for the PoC-level TV-player with the necessary OAuth2 and workflow request code is available here.

3 February 2016

Set Top Box Emulator and OAuth2 Device Flow

This is really an extension to a blog I did in October 2015 - Device Authorization using OAuth2 and OpenAM, with an application written in Node.js using the newly released OpenAM 13.0.

The basic flow hasn't really changed. Ultimately there is a client - the TV emulator - that communicates with OpenAM and the end user, with the end user also performing out-of-band operations via a device which has better UI capabilities - aka a tablet or laptop.

The app boots and initiates a request to OpenAM to get a unique user and device code, prompting the user to hit a specific URL on their tablet.

The user authenticates with the OpenAM resource server as necessary, enters the code and performs a consent dance to approve the request from the TV to be paired and retrieve data from the user's profile - in this case, overloading the postaladdress attribute in DJ to store favourite channel data.

In the meantime, the TV client is performing a polling operation - checking with the OpenAM authorization service, to see if the end user has entered the correct user_code and approved the request.  Once completed the TV retrieves a typical OAuth2 bearer payload, including the refresh_token and access_token values that can be used to retrieve the necessary attributes.
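That polling loop can be sketched generically - a minimal version, assuming a checkOnce function that calls the token endpoint and resolves to the token payload once approved, or to null while authorization is still pending:

```javascript
// Generic client-side polling loop: keep asking until the user has
// approved, or give up once the device code would have expired.
async function pollForToken(checkOnce, intervalMs, maxAttempts) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await checkOnce();
    if (result !== null) return result; // approved; token payload returned
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('device code expired before the user approved');
}
```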

Future requests from the TV now no longer need to request password or authorization data.  By leveraging a long-lived refresh_token, access can be managed centrally.

For more information on OAuth2 Device Flow see here.

1 December 2015

Scripted OpenID Connect Claims and Custom JWT Contents

OpenID Connect has been the cool cat on the JSON authorization cat walk for some time.  A powerful extension to the basic authorization flows in OAuth2, by adding in an id_token. The id_token is a JWT (JSON Web Token, pronounced 'jot' but you knew that) that is cryptographically signed and sometimes encrypted - depending on the contents.

The id_token is basically separate from the traditional access_token, containing details such as which authorization service issued the token, when the user or entity authenticated and when the token will expire.

OpenAM has supported OpenID Connect for a while, but a more recent feature is the ability to add scripting support to the returnable claims.  Adding scripting here is a really powerful feature.  Scripts can be either Groovy or JavaScript based, with a default Groovy script coming with OpenAM 13 out of the box.

The script basically allows us to creatively map scopes into attribute data, either held on the user's identity profile, or perhaps dynamically created at run time via call-outs or applied logic.

A quick edit of the out-of-the-box OIDC claims script allows me to add a user's status, from their profile held in OpenDJ, into the data available to presented scopes.  I've used the inetuserstatus attribute simply as it's populated by design.  Adding "status" to the scopes list on my OIDC client profile allows it to be requested and then mapped via the script.
So pretty simply, I can add in whatever is made available from the user's identity profile, which could include permissions attributes or group data for example.
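The mapping logic the script performs can be sketched as follows - note the real out-of-the-box script is Groovy, so this JavaScript version is purely illustrative, as is the scope-to-attribute map:

```javascript
// For each requested scope, look up the mapped profile attribute and
// return it as a claim. The map below is an example, not the default.
const scopeToAttribute = {
  status: 'inetuserstatus'
  // other scope -> attribute mappings as required
};

function mapScopesToClaims(requestedScopes, profile) {
  const claims = {};
  for (const scope of requestedScopes) {
    const attr = scopeToAttribute[scope];
    if (attr && profile[attr] !== undefined) {
      claims[scope] = profile[attr];
    }
  }
  return claims;
}
```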

Another neat feature (which isn't necessarily part of the OIDC spec), is the ability to add claims data directly into the id_token - instead of making the extra hop to the user_info endpoint with the returned access_token.  This is useful for scenarios where "offline" token introspection is needed, where an application, API, device or service, wants to perform local authorization decision making, by simply using the information provided in the id_token.  This could be quite common in the IoT world.

To add the claims data into the signed JWT id_token, you need to edit the global OIDC provider settings (Configuration | Global | OAuth2 Provider).  Under this tab, tick the "Always return claims in ID Tokens" check box.

Now, when I perform a standard request to the ../access_token endpoint, including my openid scope along with my scripted scope, I receive an id_token and access_token combination the same as normal.

So I can either call the ../user_info endpoint directly with my access_token to check my scope values (including my newly added status one), or use a tool or piece of code to introspect my id_token.  The JWT.io website is quite a cool tool to introspect the id_token, doing the decode and signing verification automatically online.  The resulting id_token introspect would look something like this:

Note the newly added "status" attribute is in the verified id_token.
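The offline decode half of that introspection is only a couple of lines - a sketch that base64url-decodes the payload segment (signature verification omitted for brevity):

```javascript
// Decode the middle (payload) segment of a JWT id_token into its claims.
function decodeIdTokenPayload(idToken) {
  const payloadSegment = idToken.split('.')[1];
  // convert base64url back to standard base64 before decoding
  const base64 = payloadSegment.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));
}
```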

23 October 2015

Device Authorization using OAuth2 and OpenAM

IoT and smart device style use cases often require the need to authorize a device to act on behalf of a user.  A common example is things like smart TVs, home appliances or wearables, that are powerful enough to communicate over HTTPS, and will often access services and APIs on the end user's behalf.

How can that be done securely, without sharing credentials?  Well, OAuth2 can come to the rescue. Whilst not part of the ratified standard, many of the OAuth2 IETF drafts describe how this could be achieved using what's known as the "Device Flow".  This flow leverages the same components as the other OAuth2 flows, with a few subtle differences.

Firstly, the device generally does not have a great UI that can handle decent human interaction - such as logging in or authorizing a consent request.  So the consenting aspect needs to be handled on a different device, one that does have standard UI capabilities.  The concept is to have the device trigger a request, before passing the authorization process off to the end user on a different device - basically accessing a URL to "authorize and pair" the device.

From an OpenAM perspective, we create a standard OAuth2 (or OIDC) agent profile with the necessary client identifier and secret (or JWT config) and the necessary scope.  The device starts the process by sending a POST request to the /oauth2/device/code endpoint, with arguments such as the scope, client ID and nonce in the URL.  If the request is successful, the response is a JSON payload with a verification URL, device_code and user_code.
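A sketch of how the device might construct that initial request URL - the host and client values are placeholders, and the parameter names follow the description above:

```javascript
// Build the URL for the device's initial call to the device code endpoint.
function buildDeviceCodeUrl(baseUrl, clientId, scope, nonce) {
  const params = new URLSearchParams({
    client_id: clientId,
    scope: scope,
    nonce: nonce
  });
  return `${baseUrl}/oauth2/device/code?${params.toString()}`;
}
```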

The end user views the URL and code (or perhaps notified via email or app) and in a separate device, goes to the necessary URL to enter the code.

This triggers the standard OAuth2 consent screen - showing which scopes the device is trying to access.

Once approved, the end user dashboard in the OpenAM UI shows the authorization - which importantly can be revoked at any time by the end user to "detach" the device.

Once authorized, the device can then call the ../oauth2/device/token endpoint with the necessary client credentials and device_code, to receive the access and refresh token payload - or an OpenID Connect JWT token as well.

The device can then start accessing resources on the user's behalf - until the user revokes the bearer token.

NB - this OAuth2 flow is only available in the nightly OpenAM 13.0 build.

DeviceEmulator code that tests the flows is available here.

18 August 2015

OpenIDM: Relationships as First Class Citizens

One of the most powerful concepts within OpenIDM is the ability to create arbitrary managed objects on the fly, with hooks that can be triggered at various points of that managed object's life cycle.  A common use of managed objects is to separate logic, policy and operational control over a type of user.  However, managed objects need not be just users - commonly they are devices, but they can also be used to manage relationship logic too, similar to how graph databases store relationship data separately from the entity being managed.

For example, think of the basic relationship between a parent and child (read this article I wrote on basic parent and child relationships). That article looked into the basic aspect of storing relationship data within the managed object itself - basically a pointer to some other object managed in OpenIDM.

That in itself is powerful, but doesn't always cover more complex many-to-many style relationships.

The above picture illustrates the concept of externalising the relationship component away from the parent and child objects.  Here we create a new managed object called "family".  This family basically has pointers to the appropriate parents and children that make up the family.  It provides a much more scalable architecture for building connections and relationships.

This model is fairly trivial to implement.  Firstly, create the 3 new managed object types via the UI - one for parent, one for child and one for family.  We can then look at adding in some hooks to link the objects together.  I'm only using onDelete and postCreate hooks, but obviously more fine-grained hooks can be used as necessary.

A basic creation flow looks like the following:

  1. Parent self-registers
  2. Via delegated admin, the parent then creates a child object (the creation of the child is done via an endpoint, in order to get access to the context object and capture the id of the parent creating the child on the server side)
  3. After the child is created, the postCreate hook is triggered, which creates a family object that contains the parent and child pointers
  4. Once the family object is created, the family postCreate hook is triggered, to simply update the parent and child objects with the newly created family _id
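The two hooks in steps 3 and 4 can be sketched against a stubbed openidm API - the object paths and field names here are illustrative, not taken from an actual deployment:

```javascript
// Step 3: child postCreate -- create the family object pointing at the
// parent and child that make it up.
function onChildPostCreate(openidm, parentId, childId) {
  const family = openidm.create('managed/family', null, {
    parents: [parentId],
    children: [childId]
  });
  return family._id;
}

// Step 4: family postCreate -- write the new family _id back onto each
// member so parent and child objects can navigate to their family.
function onFamilyPostCreate(openidm, family) {
  for (const memberId of family.parents.concat(family.children)) {
    openidm.patch(memberId, null, [
      { operation: 'add', field: '/familyId', value: family._id }
    ]);
  }
}
```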
That is a fairly simple flow.  The family object can be read via the managed/family/_id endpoint and manipulated in the same way as any other object.  A useful use case is to look for approvers on the child - for example, if the child wants to gain access to something, the family object can be looked up, either for parallel approval from both parents or just via an explicit approver associated with the family.

Another use case could be when a new parent registers and wants to "join" a particular family - the approver on the family object can be sent an "access request" and approve as necessary.

The flow when a parent or child object is deleted is similar in its simplicity:
  1. Parent (or child) is deleted, perhaps via de-registration or an admin performing a delete
  2. The parent onDelete hook triggers, which finds the appropriate family object and removes the child or parent from the array of parents or children
When a family object is deleted, again the flow is simple to interpret:
  1. Family object is deleted (via an admin)
  2. All parents and children found within the family object, are looked up and deleted
This provides a clean de-provisioning aspect.

Obviously each managed object can also have its own sync.json mappings and links to downstream systems, the same as a non-relationship based deployment.

Another common set of use cases is around consumer single views.  For example, the relationship object becomes the consumer, with all other managed objects, such as preferences, marketing data, sales history, postal addresses and so on, all being linked in the same manner.  The consumer object becomes the single entry point into the data model.

12 August 2015

Explain it Like I'm 5: OAuth2 & UMA

This entry is the first in a mini-series, where I will attempt to explain some relatively complex terms and flows in the identity and access management space, in a way a 5 year old could understand.


First up is OAuth2 and User Managed Access or UMA, a powerful federated authorization flow that sits on top of OAuth2.

Explain it like I’m 5: OAuth2

OAuth2 allows people to share data and things with other services that can access that data on their behalf.  For example, an individual might want to allow a photo printing service access to a few pictures from an album stored on a picture hosting service.

Explain it like I’m 5: Resource Server

The resource server is the service or application that holds the data or object that needs sharing.  For example, this could be the picture hosting site that stores the taken pictures.

Explain it like I’m 5: Resource Owner

The resource owner is the person who has the say on who can retrieve data from the resource server.  For example, this could be the user who took the pictures and uploaded them to the hosting service.

Explain it like I’m 5: Authorization Server

The authorization server is the security system that allows the resource owner to grant access to the data or objects stored on the resource server to the application or service.  In continuing the example of the picture hosting, it’s likely the hosting service itself would be the authorization server.

Explain it like I’m 5: Client

The client is the application that wants to gain access to the data on the resource server.  So in the continuing example, the picture printing service would be the client.

Explain it like I’m 5: UMA

UMA allows the sharing of feeds of data with multiple different 3rd parties, all from different places.
For example, wanting to share pictures not only with 3rd party services acting on the resource owner's behalf, but also with other trusted individuals, who can perhaps store those pictures in their own store and print them using their own printing service selection.