Desktop Logins Summary


Previously we went over Private URI Scheme Coding Key Points and we will finish up with a summary of Desktop Logins.

My preference is to use Private URI Schemes for my company, though others may prefer the Loopback Interface option.

Desktop App Requirements

Let’s reiterate our Desktop OAuth Requirements and their status:

  • Easy deployment / management in Corporate Environments
  • Best Login Usability for End Users
  • Short Lived Access Tokens with Silent Token Renewal
  • Our solution must Pass Security Reviews
  • Avoid writing any Complex Security Code

Issues Solved

Private URI Schemes require only a single Redirect URI to be registered in Okta, and we don’t require any infrastructure on End User PCs.

Post login usability has a few minor issues, but we have done the best we can, and User Login is Easy – usually just a button click or two:

  • Browser features such as Password Autofill work nicely
  • Private URI Schemes look integrated and activate our app after login
  • We will live with the blank browser page after login

We had to write some slightly complex code, but we kept our concerns nicely separated so our code is in good shape and easy to change.

Okta and AppAuth-JS Integration

Overall I really like both of these components, but there are a couple of areas where they do not play well together:

  • When using the loopback interface in Okta you have to register every concrete redirect URL, which makes manageability difficult
  • AppAuth-JS does not return an id token so our Desktop UI cannot read User Info locally or do an Okta logout

Private / Loopback Pros and Cons

The table below lists pros and cons for the two types of desktop login, and companies can choose the option they prefer:

Factor | Loopback Interface | Private URI Schemes
Security Weaknesses | None – and runs with low privilege | None – and runs with low privilege
IT Permissions | Should work under any Corporate IT Policy, but you never know | No risk of IT departments blocking logins
Instancing | You can run multiple instances of our app if required | You need to restrict your app to a single instance
Post Login Browser Display | Can be customized, but difficult to avoid a confusing user experience | Not customizable, and the user will see a blank page
Post Login Activation | Desktop app stays in the background | Desktop app is notified by the OS and brought to the foreground
Code Complexity | More difficult than we’d like due to re-entrancy | More difficult than we’d like due to re-entrancy
Electron Developer Setup | Easy to debug logins | More difficult to debug logins

Where Are We?

There were more issues with Desktop Apps than expected, but I’m satisfied that we’ve done a solid job and built Production Ready Solutions.

Next we will move on to Mobile Apps, where In App Browsers will simplify some aspects of code, but there will be new challenges.

Next Steps

  • Next we will begin our first Mobile Code Sample
  • For a list of all blog posts see the Index Page

Final Desktop App – Coding Key Points


Previously we discussed our Final Desktop Sample, and showed how Private URI Schemes and Token Storage worked.

Next we will look at some code for our final Electron Desktop App, which is not too different from the First Desktop Sample’s Code.

C# Sample

If you are coding in C# there is an Identity Model Code Sample here that uses a Private URI Scheme:

Electron Main and Renderer Processes

Electron apps consist of two main areas:

  • The Main Process is responsible for setting up our HTML application
  • The Renderer Process runs our HTML application’s logic

The main process will be just an entry point, since its code is difficult to debug. All real OAuth processing will occur within our application logic.

Private URI Scheme Registration

In our main process we register our Private URI Scheme at application startup. This writes to User Specific areas of the operating system and does NOT require administrator privileges:

OS Response Notifications

The operating system sends the Authorization Response URL to our Electron App’s Main Process from which we immediately forward it to the Renderer Process.

On Windows, the OS sends us a startup command that includes the Response URL as a parameter. We avoid starting a new instance of our desktop app and instead notify the running instance:
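In Electron the single instance restriction is implemented with app.requestSingleInstanceLock(), and the second-instance event supplies the new process’s command line. The notification handling can be sketched as below, where the scheme name x-mycompanyapp and the findSchemeUrl helper are illustrative assumptions rather than the sample’s real names:

```typescript
// Scan the second instance's command line arguments for the OAuth
// response URL, which is the argument that uses our private URI scheme.
// 'x-mycompanyapp' is a hypothetical scheme name for this sketch.
function findSchemeUrl(argv: string[], scheme: string): string | null {
  const prefix = `${scheme}://`;
  for (const arg of argv) {
    if (arg.startsWith(prefix)) {
      return arg;
    }
  }
  return null;
}
```

In a second-instance event handler, the result would be forwarded to the renderer process via Electron IPC, and null results ignored.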

Note that restricting the app to a single instance may not be possible in environments where you are not the host. An example might be an app that is invoked from Microsoft Excel and implemented as a plugin.

On Mac OS we receive an Open URL event instead:

Desktop App Response Notifications

Our desktop app receives the above message using the Electron RPC mechanism. I put this code into a small LoginNotifier class:

The role of this class is just to unpack response query parameters and resume the authorization flow:
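The unpacking step can be sketched as follows. The field names are the standard OAuth response parameters, but the AuthorizationResponse shape here is an assumption for illustration rather than the AppAuth-JS type:

```typescript
// Unpack the query parameters of an authorization response URL.
// Works for both private URI scheme and loopback response URLs.
interface AuthorizationResponse {
  code: string | null;
  state: string | null;
  error: string | null;
}

function unpackResponse(responseUrl: string): AuthorizationResponse {
  const params = new URL(responseUrl).searchParams;
  return {
    code: params.get('code'),
    state: params.get('state'),
    error: params.get('error'),
  };
}
```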

Once done, all other code is pretty much identical to our first sample, though of course there is no longer a need for a loopback web server.

Resuming Logins Reliably

For our previous sample we discussed Reentrancy and Correlation and we have the same concerns when using Private URI Schemes.

Our Login Notifier class uses a Login State Map to reliably resume logins. Again we keep this code separated from main OAuth processing.

Productive Developer Setup

Electron has an annoying limitation that OS notifications for Private URI Schemes only work against a packaged application, which limits developer productivity and debugging.

The approach I would use for a productive local setup would be to support both types of login, and use the loopback option during local development:

Provide a facade for OAuth processing, so that the rest of your app does not need to be aware of details. You can then swap out the implementation based on runtime conditions – we’ll look at further examples in later posts.
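The facade approach can be sketched as follows; the LoginManager interface, the class names and the isPackaged flag are illustrative assumptions rather than the sample’s actual types:

```typescript
// The rest of the app sees only this interface, so the login mechanism
// can be swapped without touching other code.
interface LoginManager {
  login(): Promise<void>;
}

class LoopbackLoginManager implements LoginManager {
  async login(): Promise<void> {
    // Start a loopback web server and open the system browser
  }
}

class PrivateSchemeLoginManager implements LoginManager {
  async login(): Promise<void> {
    // Open the system browser and rely on the OS to notify us
    // via our private URI scheme
  }
}

function createLoginManager(isPackaged: boolean): LoginManager {
  // Use the loopback option during local development, since Electron only
  // delivers private URI scheme notifications to a packaged application
  return isPackaged ? new PrivateSchemeLoginManager() : new LoopbackLoginManager();
}
```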

Secure Token Storage

I created a very simple Token Storage class that uses the Node keytar component to load / save / delete tokens against the Account Name with which the user is logged onto their desktop PC:

Our Authenticator class calls the Token Storage class after the following events:

  • When the Desktop App Starts, to load tokens
  • After the Authorization Code Grant, to save tokens
  • After the Refresh Token Grant, to update tokens
  • When the user Logs Out, to remove tokens
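The shape of this class can be sketched as follows. The SecretStore interface mirrors keytar’s getPassword / setPassword / deletePassword signature, while the in-memory implementation is only a stand-in to keep the sketch self-contained:

```typescript
// An interface matching the keytar calls our Token Storage class needs
interface SecretStore {
  getPassword(service: string, account: string): Promise<string | null>;
  setPassword(service: string, account: string, password: string): Promise<void>;
  deletePassword(service: string, account: string): Promise<boolean>;
}

// Stand-in used here instead of the real keytar component
class InMemorySecretStore implements SecretStore {
  private secrets = new Map<string, string>();
  async getPassword(service: string, account: string) {
    return this.secrets.get(`${service}:${account}`) ?? null;
  }
  async setPassword(service: string, account: string, password: string) {
    this.secrets.set(`${service}:${account}`, password);
  }
  async deletePassword(service: string, account: string) {
    return this.secrets.delete(`${service}:${account}`);
  }
}

class TokenStorage {
  constructor(private store: SecretStore, private serviceName: string) {}

  // Save serialized tokens against the account name of the logged on user
  async save(account: string, authState: object): Promise<void> {
    await this.store.setPassword(this.serviceName, account, JSON.stringify(authState));
  }

  async load(account: string): Promise<object | null> {
    const raw = await this.store.getPassword(this.serviceName, account);
    return raw ? JSON.parse(raw) : null;
  }

  async remove(account: string): Promise<void> {
    await this.store.deletePassword(this.serviceName, account);
  }
}
```

In the real app the SecretStore would be the keytar module itself, and the account name would come from the operating system.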

Where Are We?

The main benefit of a Private URI Scheme is around deployment and administration concerns, and it is the desktop option I prefer.

Our Cross Platform Desktop App has also been updated to use Secure Token Storage for the platforms we deploy to.

Next Steps

Final Desktop App – Overview


Previously we walked through our Desktop App Coding Key Points. We have a good working solution but we will see if we can improve it further for the final sample.

Code Sample Features

We will implement 2 new features:

  • Logins will use a Private URI Scheme rather than a Loopback Interface
  • After login, tokens will be saved to Operating System Secure Storage

Secure Storage of Tokens

When we demonstrate mobile logins, we will store tokens in secure storage after login, so that the user does not need to log in every time they start or reactivate the app.

We will follow this same behavior in a desktop app, though of course this is not part of the OAuth standards. You may prefer to just store tokens in memory after login.

Deployment Aspects

One problem with our first sample is that Okta requires us to register every possible URL for the loopback interface, which is tricky from a management viewpoint since we might want to support 100 or more ports.

Another concern is the possibility of Corporate IT Departments somehow preventing end users from running local loopback web servers.

Getting the Code Sample

The final sample is available here, consisting of a UI and API component. It can be downloaded / cloned to your local PC with this command:

  • git clone

Configuration Changes

Setup steps are almost identical to the First Desktop Code Sample, though the desktop app’s configuration is reduced since there are no ports or login completion pages:

In Okta, our Desktop App’s registration now only has a single Redirect URI for our Private URI Scheme:

Private URI Scheme Rollout

Our Desktop App will register the Private URI Scheme as a Per User Setting that does not require administrator actions. In corporate environments we must be vigilant about operations that require Administrator Privileges:

  • Most end users will NOT have administrator access to their local PC
  • Your Desktop App’s rollout may be managed by IT Administrators
  • Administrative setup should be done by your App’s Install Program
  • The Desktop App itself should usually run with normal / low privileges

Registered Private URI Scheme

On Windows, the Private URI Scheme is registered at the below per user registry location under HKEY_CURRENT_USER:
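As a sketch, the registered keys follow the standard Windows URL protocol layout shown below, where the scheme name and application path are hypothetical placeholders:

```
HKEY_CURRENT_USER\Software\Classes\x-mycompanyapp
    (Default)      = "URL:x-mycompanyapp"
    URL Protocol   = ""
    \shell\open\command
        (Default)  = "\"C:\Path\To\DesktopApp.exe\" \"%1\""
```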

On Mac OS I installed the Default Apps Tool to view the scheme and the app it was registered to, which is also a per user setting:

Build Prerequisites

Our Desktop App will manage operating system secure storage via a lower level NodeJS component called keytar.

Although Javascript technologies are very productive for normal UI coding, it can be harder to integrate lower level components.

On Windows I needed to install the following components globally:

  • npm install --global node-gyp
  • npm install --global --production windows-build-tools

On my Fedora Linux system I needed to ensure that g++ was installed via the first command below, then to install some security libraries:

  • sudo dnf group install "C Development Tools and Libraries" -y
  • sudo yum install libsecret-devel

Preparing Token Storage

We can start running the final desktop sample by executing the following command:

  • npm run buildnative

This will run electron-rebuild to build keytar, which is partly written in C++ code that needs to link with operating system libraries.

Building the Code Sample

We can then build the final desktop sample by executing the following command:

  • npm run build

This command will be a little slower than previously, since it builds the TypeScript code and then packages the app into the dist folder:

Running the Code Sample

We can run the app via npm start or by running it from the File Explorer, and the Desktop App’s behavior is the same as our previous sample:

Login Browser Prompts

After a login the user is prompted to open our desktop app, and this works in all of the main browsers. When the user consents to open the desktop app it is brought to the foreground.

Google Chrome



Internet Explorer


Login Re-Entrancy

Again it is possible that a busy user clicks Sign In, accidentally closes the browser tab, then needs to recover, and we will aim to prevent this from causing errors where possible:

Post Login Display

After login the display looks poor, since the user is left with a blank page or a grayed out login page, and there is no way to customize this:

Having said that, our Loopback Interface Code Sample also had a post login display that looked pretty unnatural.

Token Storage

After login our tokens are stored. Recall that our refresh token was configured to last for 12 hours.

During this period a user can close and re-open the app without needing to re-login, so that the first thing the user will see is your app:

Secure Token Storage

After login, our auth state is stored in Per User Secure Storage and you can only access this state by logging on to the PC as the user.

On Windows the keytar component saves our Desktop App’s tokens to the Windows Credentials Manager:

On Mac OS the keytar component saves our Desktop App’s tokens to the Keychain:

Where Are We?

Companies can adopt either the Loopback Interface or a Private URI Scheme, both of which are fine from a security viewpoint.

Next we will look at some code and then provide a summary of Desktop Logins, including Pros and Cons for the 2 technical options.

Next Steps

Desktop App – Coding Key Points


The Desktop App Technical Workflow explained how our Code Sample uses OAuth messages. Next let’s look at the code behind the workflow.

Coding Goals

We will summarize the coding goals for our Desktop Code Sample as follows:

  • Solid code with a clean Separation of Concerns
  • Reliable if End Users click Sign In more than once
  • We want to control Usability aspects where possible
  • Follow AppAuth-JS Standards so that we can easily take new releases
  • Aim to write only a Small Amount of technically simple code
  • Easy to Change to a Private URI Scheme later

TypeScript Technology

We will continue with TypeScript, since it gives us a very productive coding language. Our page flow is similar to that of our SPA, except that we no longer need to handle login responses as part of the page workflow.

C# Sample

If you are coding in C# there is an Identity Model Code Sample here that uses the Loopback Interface:

Desktop ‘Authenticator’ Class

Of course the great thing about Javascript technologies is that I should be able to just copy my earlier SPA to produce an equivalent Desktop App.

For our SPA we intentionally used an ‘authenticator‘ class to deal with the ‘authentication related stuff‘ of logins and tokens, and this is the main class that will need to change.

Other Custom Classes

I also created a number of other plumbing classes, in order to separate concerns, and we will cover the roles of some of these shortly.

Triggering a Login Prompt

As for our SPA, our Desktop App will trigger a login as a result of the UI not being able to call APIs, when there is no valid access token.

The HttpClient class and its 401 handling for making reliable API calls is precisely the same as for our SPA:

When we cannot get an access token we will throw a Login Required error that is handled specially:

Our error handler class will then move the user to the Sign In Required page, and the below router method records the user’s current hash location so that it can be restored after login.

The above mechanism copes with logins both at application startup and also when the user’s session expires:

  • If a user logs into our desktop app on Monday, then goes home, then accesses the running app again on Tuesday morning, the user is prompted to sign in again but does not experience any problems
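The trigger mechanism can be sketched as below. The Authenticator interface and callApiWithRetry function are illustrative assumptions; the real sample spreads this logic across its HttpClient and error handler classes:

```typescript
// Thrown when no token is available, so that the error handler can move
// the user to the Sign In Required page
class LoginRequiredError extends Error {
  constructor() {
    super('login_required');
  }
}

interface ApiResponse {
  status: number;
}

interface Authenticator {
  getAccessToken(): Promise<string | null>;     // current in-memory token
  refreshAccessToken(): Promise<string | null>; // uses the refresh token grant
}

async function callApiWithRetry(
  authenticator: Authenticator,
  apiCall: (token: string) => Promise<ApiResponse>
): Promise<ApiResponse> {

  let token = await authenticator.getAccessToken();
  if (!token) throw new LoginRequiredError();

  // On a 401, refresh the access token and retry the API call exactly once
  let response = await apiCall(token);
  if (response.status === 401) {
    token = await authenticator.refreshAccessToken();
    if (!token) throw new LoginRequiredError();
    response = await apiCall(token);
  }
  return response;
}
```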

The Authorization Redirect

We use AppAuth-JS classes when redirecting the user to login. This involves starting a loopback web server, creating an authorization handler and starting the authorization redirect:

Note that AppAuth-JS is a fairly new library and at the time of writing we need to write more authentication plumbing than we did for our SPA.

Custom Authorization Handler

AppAuth-JS provides the following concrete class for managing authorization redirects via the loopback interface:

It is also possible to override the base AuthorizationRequestHandler class, which I chose to do because I wanted to customize the following areas:

  • Behavior when the user clicks Sign In multiple times
  • Behavior of the Browser Display after login has completed

I called my class BrowserAuthorizationRequestHandler and followed the same AppAuth-JS pattern as the default NodeBasedHandler.

The Authorization Response

Our authenticator class follows the AppAuth-JS pattern of using a notifier to inform us when the authorization response has been received:

The Authorization Code Grant

Handling the login response mostly just covers the Authorization Code Grant message, which is easily implemented via the AppAuth TokenRequest classes.

The response is of type TokenResponse, and contains the token fields that we want to store:

In my code I have referred to this as the ‘authState‘ in line with a class from AppAuth mobile libraries which we’ll be using shortly.

Login Events

I followed the AppAuth-JS approach of using NodeJS events to signal login completion, which reduces code complexity.

The loopback web server raises an Authorization Response event to resume authorization processing, then consumes an Authorization Response Completed event to update the browser.

User Actions and Re-Entrancy

The main difficulty of coding Desktop Logins is that the browser is completely disconnected from the desktop app. We may have to handle user actions such as these:

  • A busy Corporate User clicks Sign In
  • The user accidentally closes the browser tab
  • The user retries by clicking Sign In again

Some developers may not need to handle re-entrancy, but others will need to get their solutions past QA departments or fussy UX / Business owners.

Loopback Web Server

To handle re-entrancy our loopback web server class uses static class members and only starts if it is not already started:

We avoid running multiple web servers, since we want to ensure that after a successful login we have cleaned up and are not leaking any resources.

Correlation when Resuming Logins

Our PKCE handling uses a simple CodeVerifier class, which correctly generates a different random challenge + verifier for every login request:

To handle re-entrancy and ensure that we resume the correct login, with the correct contextual data, the loopback web server looks up the LoginEvents Instance from the authorization request.

If this is not done then we can run into issues such as the Code Verifier for Sign In Attempt 1 being used for Sign In Attempt 2, leading to end user problems that we’d prefer to avoid. 
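The PKCE generation itself is simple, and can be sketched as follows: a fresh random verifier per login attempt, with the challenge derived as the base64url-encoded SHA256 hash of the verifier:

```typescript
import { createHash, randomBytes } from 'crypto';

// Base64url encoding as required by the PKCE specification
function base64UrlEncode(buffer: Buffer): string {
  return buffer
    .toString('base64')
    .replace(/\+/g, '-')
    .replace(/\//g, '_')
    .replace(/=+$/, '');
}

// A different random verifier must be generated for every login attempt
function generateCodeVerifier(): string {
  return base64UrlEncode(randomBytes(32));
}

// challenge = BASE64URL(SHA256(ASCII(verifier)))
function computeCodeChallenge(verifier: string): string {
  return base64UrlEncode(createHash('sha256').update(verifier).digest());
}
```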

Login State Map

Login requests generate a random ‘state’ parameter that is also received in the response, and we can use this for correlation. Our LoginState class is a collection that maps the State to LoginEvents for each login attempt.

This is a little more complex than I’d like but gives us some extra reliability. I have kept this code separate to the main authentication processing, so that the Authenticator class focuses only on OAuth message handling.
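The collection can be sketched as below, where the LoginEvents shape is an illustrative assumption:

```typescript
// Events for one login attempt, keyed by that attempt's 'state' value
interface LoginEvents {
  onAuthorizationResponse: (queryParams: Record<string, string>) => void;
}

class LoginState {
  private eventsByState = new Map<string, LoginEvents>();

  storeLoginState(state: string, events: LoginEvents): void {
    this.eventsByState.set(state, events);
  }

  // Look up and remove the events for the attempt that issued this state,
  // so that attempt 1's code verifier is never used for attempt 2
  handleLoginResponse(state: string): LoginEvents | null {
    const events = this.eventsByState.get(state) ?? null;
    this.eventsByState.delete(state);
    return events;
  }
}
```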

Reliability is Not Perfect

It is probably logically impossible to solve all re-entrancy cases in a reliable manner. If the user clicks Sign In twice, then logs in twice, on the second attempt the web server will be stopped and the user will see this:

We will live with this, since the alternative is to keep a web server running indefinitely and leak resources. We’ve achieved our main goal of allowing users to retry, and we can explain the solution to QA and other stakeholders.

Login Handling is Complete

We have completed the difficult work and our Desktop App can now call APIs to get and render data, in precisely the same manner as for our SPA.

The Refresh Token Grant

After 30 minutes our access token will expire, and we will use the AppAuth TokenRequest classes for token renewal, then update our memory state:

The only interesting aspect is that we must check for a specific error code of ‘invalid_grant‘ in order to detect session expiry, at which point we just empty our tokens from memory.

Testing Token Expiry

Finally, our Authenticator class has some developer options to enable us to productively test token expiry.

We can’t force 30 minutes / 12 hours to pass to expire access / refresh tokens. However, we can cause OAuth messages to use expired behavior:

  • Corrupted tokens get sent to the Authorization Server
  • We get the same error responses as for real expired tokens
  • We can verify that our application handles them correctly
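The technique amounts to corrupting the stored token so that the Authorization Server rejects it in the same way as a genuinely expired one. A sketch, with an assumed authState shape:

```typescript
interface AuthState {
  accessToken: string;
  refreshToken: string;
}

// Appending characters makes the token invalid, so the next API call or
// token renewal fails exactly as it would after real expiry
function expireAccessToken(authState: AuthState): AuthState {
  return { ...authState, accessToken: `${authState.accessToken}x` };
}

function expireRefreshToken(authState: AuthState): AuthState {
  return { ...authState, refreshToken: `${authState.refreshToken}x` };
}
```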

Where Are We?

We have a pretty complete desktop sample and hopefully can release to production without any issues.

The re-entrancy code is more complex than I’d like, and is a consequence of OAuth standards, which dictate that we have to use two disconnected UIs.

Next Steps

Desktop App – Technical Workflow


In How to Run the Basic Desktop Sample we got some OAuth basics working so let’s take a closer look at HTTP messages used by the sample.

You can follow the same process, using an HTTP debugger such as Fiddler or Charles, to look at the requests on your operating system.

Step 1. Desktop UI Reads Configuration

The Desktop App’s install program would deploy the configuration file to End User PCs, and read OAuth settings from it at runtime.

Note that some of these fields have nothing to do with OAuth Security, but will be used for Deployment or Login User Experience reasons.

Step 2. Desktop UI tries to call an API during Page Load

The UI will realize it does not have an access token yet and will redirect to the Login Required page to initiate the process for getting one:

Step 3. Desktop UI downloads OAuth Metadata

When Sign In is clicked, the desktop app downloads Open Id Connect Metadata from Okta. The Desktop App will use these endpoints:

  • The Authorization Endpoint for User Logins
  • The Token Endpoint for Silent Token Renewal

Step 4. Desktop UI builds the Authorization Request URL

We are using the recommended flow for a desktop app, which is the Authorization Code Flow (PKCE):

The full redirect request as a URL looks like this:

This is quite a bit different from the Implicit Flow Redirect that we used for our SPA in an earlier post, with the following key differences.

Field | Description
redirect_uri | A Web URL served by the Desktop App itself
response_type | The ‘code’ response type means the login result is an authorization code
scope | A new ‘offline_access’ scope must be supplied in order to get a refresh token
code_challenge | A random key generated differently for each authorization request
code_challenge_method | The algorithm used to generate the random key, which (currently) must be SHA256
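Composing the request from these fields can be sketched as below; the endpoint, client id and redirect URI values passed in would come from configuration, and the names here are hypothetical placeholders:

```typescript
// Build the authorization request URL for the Authorization Code Flow (PKCE)
function buildAuthorizationUrl(
  authorizationEndpoint: string,
  clientId: string,
  redirectUri: string,
  codeChallenge: string,
  state: string
): string {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    response_type: 'code',
    scope: 'openid profile email offline_access',
    code_challenge: codeChallenge,
    code_challenge_method: 'S256',
    state,
  });
  return `${authorizationEndpoint}?${params.toString()}`;
}
```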

Step 5. Desktop UI Invokes System Browser

The Desktop UI gets a free port in our configured range (8001 – 8003) and listens on a Loopback URL for the Login Response. The System Browser is then launched with the above OAuth Redirect URL.

Step 6. Login is Processed

The Okta system receives the authorization request, then does Login Processing if required, to validate the User Name and Password.

The OAuth flow is then resumed and Okta issues a browser redirect to the Loopback URI, with the Authorization Code as a query parameter:

Step 7. Desktop App Receives Authorization Response

When the browser sends a request to the Loopback URL, the Desktop App’s Loopback Web Server processes the HTTP request and must supply an HTTP response.

Step 8. Desktop App swaps Authorization Code for Tokens

Once the Desktop App has received the Authorization Code it immediately calls Okta’s Token Endpoint to swap the code for tokens.

That is, there are 2 steps to logins using the Authorization Code Flow:

  • Phase 1 is to receive the Authorization Response
  • Phase 2 is the above message, the Authorization Code Grant

There are a number of fields of interest in the above request message so let’s take a closer look:

Field | Description
grant_type | The token endpoint supports a number of different grant types
redirect_uri | Verifies that the Authorization Code Grant is associated to the same app that issued the Authorization Request
code | The authorization code being swapped
code_verifier | Generated from the same random key as the earlier code_challenge field
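The grant message body built from these fields can be sketched as below; in the sample this is handled by AppAuth-JS TokenRequest classes rather than hand-rolled code:

```typescript
// Build the form body for the Authorization Code Grant, sent as a POST
// to the token endpoint with content type application/x-www-form-urlencoded
function buildAuthorizationCodeGrant(
  clientId: string,
  redirectUri: string,
  code: string,
  codeVerifier: string
): URLSearchParams {
  return new URLSearchParams({
    grant_type: 'authorization_code',
    client_id: clientId,
    redirect_uri: redirectUri,
    code,
    code_verifier: codeVerifier,
  });
}
```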

Note that the Authorization Code itself cannot be used for anything confidential and has to be swapped for tokens before our UI can call APIs.

Step 9. Desktop App Redirects to a Company Web Page

To finish up browser processing, our Desktop App redirects the user to a simple Company Post Login Page, so that we can display a reasonably polished looking page to the user.

Of course this is not necessary from an OAuth viewpoint, but it is possible to customize what the System Browser shows after a login, and the UX people at your company may care about this.

Step 10. Authorization Codes and PKCE

An Authorization Code is a short lived (< 10 minutes) login result and each code can only be used for one Authorization Code Grant message.

For Native Apps there is potential for a malicious app to register the same Redirect URI, and possibly intercept an Authorization Code intended for our Desktop App.

Therefore the OAuth flow for Native Apps uses a technique called Proof Key for Code Exchange (PKCE) to prevent this:

  • During the Authorization Request, the Authorization Server stores the Code Challenge mapped to the Authorization Code it issues
  • During the Authorization Code Grant, the Authorization Server gets the stored Code Challenge and checks that the supplied Code Verifier produces it

This mechanism prevents malicious apps from being able to get tokens if they manage to intercept an Authorization Code.

Step 11. UI Stores Refresh Token

For our first desktop sample we are just storing tokens in memory. Note that we need to be careful about storing refresh tokens, since they are long lived credentials.

Step 12. UI calls API with Access Token

We can view access token requests from our Desktop App to our API in the same manner as for our SPA, by looking at the HTTP Authorization Header:

We can view the JWT details in an online viewer such as JWT.IO and, as for our SPA, we are using small confidential tokens containing just a User Id:

The profile and email scopes mean the access token can be sent to the User Info endpoint to get the User Name and Email for the logged in user.

Step 13. First API Call is to get User Info

AppAuth-JS libraries do not currently support returning Id Tokens to our UI, so we will use the technique from our first SPA Code Sample of Getting User Info via our API:

The UI receives the response and then renders details about who the logged in user is:

Step 14. API Validates Access Token

To see this request in an HTTP debugger such as Fiddler, restart the API with an HTTPS_PROXY environment variable:

  • On Windows run ‘npm run httpDebug’
  • On Mac OS or Linux run ‘sudo -E npm run httpDebug’

Our API validates incoming tokens by calling the Introspection Endpoint, and usually gets an Active response indicating that the token is valid:

Our API also calls the User Info Endpoint, then Caches the Token and User Info Details in Memory, and returns the User Info to the UI:

Step 15. API returns Personalized Data

Our API then returns its ‘corporate financial data‘ and of course the real reason we are using OAuth technologies is to protect our Corporate Assets.

In our simple example we are returning the same hard coded data for every user but a real Corporate API might apply Authorization Rules based on the User Identity, as discussed in This Earlier Post.

Step 16. Id Tokens and Native Apps

Single Page Applications use the Implicit Flow, where receiving an id token is highly recommended, since it protects against some types of token substitution attack.

The id token is less important for Desktop Apps, since only the Authorization Code could be substituted, and PKCE would prevent that from working, as mentioned in the below AppAuth iOS Post:

A hybrid flow that includes an Id Token as well as an Authorization Code is possible, by specifying the below value in the authorization request:

  • response_type = code id_token

Personally I would rather use the hybrid flow, so that:

  • My Desktop App concepts are similar to those for my SPAs
  • The UI can use extra Open Id Connect features that require id tokens
  • PEN testers may perceive the security to be more complete

Step 17. Access Token Expires

Every 30 minutes our access token will expire. Our UI allows us to roughly simulate this by clicking Expire Access Token followed by Refresh Data:

The ‘expire’ operation intentionally adds characters to the access token so that when we send it to our API, and the access token is forwarded to Okta, the introspection call fails:

The API response to the UI is a 401, and the UI needs to be prepared to handle this at any time, by getting a new access token and retrying the API call:

Step 18. Access Token is Refreshed

Our Desktop App automatically handles 401s by using the Refresh Token to get a new Access Token:

This is a standard OAuth message called the Refresh Token Grant. Note that refresh tokens are not JWTs and have a vendor specific format.

Some responses to the Refresh Token Grant may contain a new ‘rolling’ refresh token, which replaces the previous one. This can help limit the lifetime of stolen refresh tokens.

Note also that it is possible to logon with all possible scopes but to request only a subset of these scopes in the Refresh Token Grant:

This enables you to use different access tokens with different privileges, and perhaps to ‘run with least privilege‘ for the majority of API calls.
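A Refresh Token Grant body with an optional scope subset can be sketched as follows; again, in the sample this message is produced by AppAuth-JS rather than by hand:

```typescript
// Build the form body for the Refresh Token Grant. If a scope subset is
// supplied, the new access token runs with reduced privileges.
function buildRefreshTokenGrant(
  clientId: string,
  refreshToken: string,
  scopeSubset?: string
): URLSearchParams {
  const params = new URLSearchParams({
    grant_type: 'refresh_token',
    client_id: clientId,
    refresh_token: refreshToken,
  });
  if (scopeSubset) {
    params.set('scope', scopeSubset);
  }
  return params;
}
```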

Step 19. Refresh Token Expires

We can also simulate the refresh token expiring after 12 hours in a similar manner to the access token expiring every 30 minutes. Our sample again does this by adding characters to the token.

Start by selecting a particular location within our Desktop App, then click Expire Refresh Token followed by Refresh Data:

When Refresh Data is clicked, our Refresh Token Grant will fail with an Invalid Grant. We need to treat this error code specially since it means ‘User Session Expired‘:

The user is then moved to the Sign In screen, since we need to invoke the System Browser and allow the user to retry:

When signing in after session expiry, we then restore the user’s location within our Desktop App:

Step 20. Refresh Token Revocation

In the event of an attacker somehow stealing a refresh token, and gaining long lasting access to your Corporate Assets, you may want a mechanism for an IT Administrator to centrally find and cancel a refresh token.

My trial version of Okta does not have an option to view refresh tokens against users, so let’s look at how Ping Federate manages this, so that you can think about how your company might revoke tokens:

  • The Authorization Server can store all refresh tokens in a database
  • The IT administrator can query tokens via user / application / time
  • The IT administrator can delete a database record to revoke the token

Note also that the database only stores a hash of the refresh token, so that it is not possible for a rogue IT administrator to steal and use one.

Step 21. User Explicitly Logs Out

In our previous post on Logout we discussed that:

  • A common reason for wanting a logout feature is to enable testers to log on as Different Users with Different Permissions to Corporate Assets
  • To do a full Open Id Connect Logout from Okta, and remove its Identity Provider cookie, requires us to capture the Id Token during login

Since we don’t capture an id token during login, our Desktop App will just do a ‘basic logout‘ that clears all tokens from memory and redirects the user to the Desktop App’s Sign In Page.

Step 22. OAuth Failures Handled

In our earlier SPA Code Sample we tested some Failure Scenarios to try to help ensure Fast Resolution of any Production OAuth Issues.

Fiddler is a great tool for manipulating OAuth requests, and I use it to test my error handling. As an interesting case, let’s look at PKCE verification.

For my Okta Authorization Server I can set a Fiddler breakpoint on the Token Endpoint via the following command:

  • bpu oauth2/default/v1/token

During the Authorization Code Grant we can then capture the request and edit the PKCE verifier value:

As expected, the request is then rejected, indicating that Okta is using the earlier received Code Challenge value to validate the Code Verifier.

We can then verify that our Desktop App is handling the failure in a controlled manner, by capturing fields from the OAuth error response:

We also want to avoid a misleading response in the browser, so we show a ‘Friendly Error Page‘ in this case. Again, you may not want to do this, but it is good to know what’s possible.

Where Are We?

We’ve used AppAuth-JS libraries to implement the Authorization Code Flow (PKCE) with Good Security and control over Usability, and we understand the technical messages. Next we’ll look at some of the code behind it.

Next Steps

Desktop App – How to Run the Code Sample


The Desktop Code Sample Overview Page describes what we will build, so let’s get it running. This section will repeat many of the same steps from the earlier SPA Code Sample Setup.

Step 1: Add Domains to your PC

First go to the hosts file on your local PC, which will exist at one of these locations:

OS | Path
Windows | c:\windows\system32\drivers\etc\hosts
Mac OS / Linux | /etc/hosts

Add entries as follows to represent our domains:
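For example, entries of the following form, where the domain names are placeholders for whatever the sample's configuration uses:

```
127.0.0.1  web.mycompany.com
127.0.0.1  api.mycompany.com
```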


Step 2: Use Okta as an Authorization Server

Go to the Okta developer site and sign up to get a developer account, after which you will receive an email to activate your account.

You will then get your own Authorization Server, which you can log into at an Admin URL such as this:

Once logged in, browse to API / Authorization Servers and view the default item that has been automatically created:

The above URL is what we will use in our Code Sample, and in my case the value is

Step 3: Register OAuth Applications

In Okta we must add trust entries so that our applications can act as OAuth Clients and call the Authorization Server.

First register a Service Application for our API and note that the key OAuth settings are the client_id and client_secret:

Next register a Native Application and note that the key OAuth settings are the client_id and the redirect_uris:

About Loopback Redirect URIs

According to OAuth Standards for Native Apps, Authorization Servers should allow a single Loopback URL to be registered, then accept redirect URLs from any port at runtime.

At the time of writing this seems to be a Future Okta Backlog Item so I had to register all possible URLs that we will use for our Desktop Code Sample.

Step 4: Configure Session Times

In Okta, go to API / Authorization Servers / Default / Access Policies / Default Policy Rule / Edit and set details as follows:

  • The user session is represented by the refresh token
  • Refresh tokens last for 12 hours
  • Access tokens are short lived and expire every 30 minutes
  • The refresh token will be used to silently renew access tokens

Step 5: Download Code from GitHub

The project is available here, consisting of a UI and API component. It can be downloaded / cloned to your local PC with this command:

  • git clone

Step 6: View Code in an IDE

I use Visual Studio Code on Windows, Mac OS and Fedora Linux, since it is cross platform and lightweight.

Select ‘Open Folder’ and browse to the AuthGuidance.DesktopSample1 folder, to view the code for both UI and API with syntax coloring:

Step 7: Update Configuration

Both the Desktop App and API have configuration files. In a real world app these would typically be delivered by your Continuous Delivery process.

For the Desktop App you will need to update the configuration to match your Authorization Server’s Base URL and OAuth Client Id:

Note that our Desktop Sample specifies the following details that differ from that of our SPA:

Setting | Description
Loopback Port Range | Corresponds to our Okta registered URIs
Login Success Page | Where to send the browser after a successful login
Login Error Page | Where to send the browser after a failed login
Offline Access Scope | A scope needed to get refresh tokens
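Collecting these settings, the Desktop App's configuration might look like the fragment below; the field names, URLs and port range are illustrative rather than the sample's exact schema:

```json
{
  "oauth": {
    "authority": "https://<your-tenant>.okta.com/oauth2/default",
    "clientId": "<native app client id>",
    "loopbackMinPort": 8001,
    "loopbackMaxPort": 8003,
    "loginSuccessPage": "https://web.mycompany.com/loginsuccess.html",
    "loginErrorPage": "https://web.mycompany.com/loginerror.html",
    "scope": "openid profile offline_access"
  }
}
```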

For the API you will need to update the configuration to match your OAuth Server Base URL and the API’s Client Id + Client Secret:

Step 8: Install Node JS if Required

Go to the Node JS Web Site and run the installer for your operating system.

Step 9: Install Desktop and API Dependencies

From both the desktopapp and api folders run ‘npm install’, which will download dependencies, including Electron:

Step 10: Configure SSL for the Code Sample

Next install the below root certificate used by the API to your operating system, as described in Root Certificate Deployment.

Step 11: Start the API

From the api folder run ‘npm start’ to run the API, which will listen on port 443. If other system components are listening on port 443, such as IIS or Apache, stop them temporarily.

Note that on Windows you may need to use an administrator user and on Mac OS or Linux you will need to run this command:

  • sudo -E npm start

Browse to the following URL to get data and verify that you get an Unauthorized response:

In production you would run your API as a low privilege user, and there are various ways to enable this. Our objective though is just to enable real world HTTPS traffic on a developer PC.

Use Alternative Ports if you prefer

My preference is to use the Standard SSL Port 443 allowed by company firewalls. If you prefer to use an alternative port such as 3000, change the host names in Okta and the above configuration files:


Step 12: Build the Desktop App

From the desktopapp folder run ‘npm run build’ to build the Electron App:

The TypeScript code will be transpiled to Javascript and output to a built folder that is referenced in the index.html file:

Step 13: Run the Desktop App

From the desktopapp folder run ‘npm start’ to run the Electron App:

This will invoke the Electron executable due to the following command in the package.json file:

About Electron Executables

When running ‘npm start‘ we can use operating system tools to see that a couple of instances of the Electron executable are running, which point to our application files on disk:

We would build the Electron app into an executable before deploying to real users and we can do this with ‘npm run pack‘:

You can then run the type of deployed executable that would be delivered to your end users:

Step 14. Get Node JS SSL Working

You are likely to receive SSL errors when the Desktop App calls the API, due to untrusted root certificates. As a quick fix you can edit package.json for the API and Desktop App to disable Node SSL errors:


The API’s package.json will then look like this:

The Desktop App’s package.json will then look like this:
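The quick fix in question is typically the NODE_TLS_REJECT_UNAUTHORIZED environment variable added to each start script; an illustrative fragment, where the actual start command is the sample's own, and which should never be used in production:

```json
{
  "scripts": {
    "start": "NODE_TLS_REJECT_UNAUTHORIZED=0 <existing start command>"
  }
}
```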

Once you are up and running, a less hacky option is to Add Root Certificates to Node JS via the NODE_EXTRA_CA_CERTS environment variable.

Step 15: Run a Desktop App Login

The first Electron screen we will see is the login screen. We need to show this every time we redirect the user to the system browser to log in:

When Sign In is clicked we give the user some visual progress in the Desktop App:

The System Browser is invoked and the user prompted to login to Okta. Note that browser features such as Password Autofill work nicely:

However, rather than logging in, close the browser on the first attempt. We will ensure that our Desktop App copes with this resiliently.

Next click Sign In again, and login to the second browser instance. After login the System Browser is redirected to a custom company web page:

The Electron App then receives the login result, gets tokens and can securely interact with its API and render the resulting data.

Step 16: Run a User Session

We can then use the UI to simulate session related events that may occur in your desktop apps. We will drill into these operations in the next blog post:

  • Access token expiry
  • Refresh token expiry
  • Explicit user logout

Note that our Desktop App now has a Long Lived Credential and we need to think about how our UI will securely deal with it.

At the time of writing the Desktop App only stores tokens in memory, so that if we use the Electron reload options in the default menu, tokens are lost and the user has to log in again:

Where Are We?

We have used our existing SPA to quickly build a Desktop App using Electron, which runs on all the main platforms.

We used the AppAuth-JS library to implement the Recommended OAuth Flow for a Desktop App – and zero changes were needed to our API’s code.

For the next post we will drill into OAuth messages for the Desktop Sample, which includes use of Refresh Tokens and PKCE Handling.

Next Steps

Desktop Code Sample Overview


Previously we defined our Mobile App Requirements, but we will first implement a Desktop Code Sample, since it is tricky and has value for many companies.

Desktop OAuth Requirements

Let’s adapt our mobile requirements slightly for the Desktop case:

  • Easy to deploy / manage in Corporate Environments
  • Best Login Usability for End Users
  • Short Lived Access Tokens with Silent Token Renewal
  • Our solution must Pass Security Reviews
  • We want to avoid writing any Complex Security Code

OAuth Desktop Apps Background

Identity Model has some good technical resources for desktop apps, and the below video is a good introduction to best practice:


The Desktop Code Sample will get the below components and endpoints talking to each other, in the same manner as for our earlier SPA Sample:

Desktop App Technology

We will use Cross Platform Javascript Technologies and implement an Electron App (note that Visual Studio Code itself is cross platform and built with Electron).

The most important aspect is understanding what we want to code. Once we have clarified our requirements we could deliver them in any desktop technology, such as WPF Apps built in C#.

AppAuth Security Library

We will use the Google AppAuth-JS Library to implement security aspects. We would expect this library to evolve in future and get many new features for free.

Technically we will use the Authorization Code Flow with PKCE Handling, and we will look at what this involves later.

We will follow the same technical approach as the AppAuth-JS Basic Sample but will go further to implement a more complete Corporate Desktop App.

Other Languages?

My samples use Javascript based technologies, but my main goal is guidance, and I hope you can easily understand concepts and implement them in any programming language.

If you are using C# I would recommend the Certified OIDC Client 2 Library, which reduces Login and Token handling to a few lines of code:

Desktop App Corporate Theme

As discussed in This Earlier Post, our app’s theme is meant to simulate use of Monetary Corporate Assets that need securing.

Our Desktop App will have exactly the same (trivially simple) functionality as our SPA, with navigation between two pages.

Runs on Multiple Platforms

Our Desktop App will run on Windows:

Our Desktop App will also run on Mac OS:

Our Desktop App can also be run on Linux if needed:

Many companies use a mix of Windows and Mac OS desktops these days, so supporting both may be what your customers prefer.

We’ll Avoid Web View Logins

A few years ago it was standard for Desktop Apps to do logins via a web view. For example, in a C# application you might write code like this:

The Login User Experience then used a popup window hosting a web view browser control, which looked integrated from a UX viewpoint:

The popup window would then capture the login response data from the web view using an ‘Out of Browser URL‘ such as urn:ietf:wg:oauth:2.0:oob.

Although the above code works, it always had usability problems, since in effect the web view runs a private browser session:

  • Cookies that enable Single Sign On across apps are not remembered
  • Browser features such as Password Autofill also do not work

We’ll Follow Security Best Practice

Part of the best practice for Native Apps is for the screen where credentials are handled (such as password values) to be external to the app.

Authorization Servers are recommended to look at the User Agent and block login requests from web views. Google logins already do this:

This Auth0 Article covers the current Desktop Apps best practice – in particular you must use the User’s System Browser to handle logins.

Which Authorization Response Handling Option?

If we look at the OAuth Standards for Native Apps, there are 3 potential redirect options for receiving the login result in a desktop app:

Usage is summarized below, and we will explore both options that work for a desktop app, to see which has fewest overall issues:

Option | Example Redirect URI | Used By | Prerequisites
Private URI Scheme | com.mycompany:/desktopapp | Mobile or desktop apps | A custom URI scheme must be configured when the app is installed
Claimed HTTPS Scheme | | Mobile apps | iOS and Android have support for claiming HTTPS URLs and invoking a registered mobile app
Loopback Interface | | Desktop apps | End User PCs must be able to run a local web server and listen on a loopback URL

The AppAuth Code Sample

We know we have to use the System Browser, so let’s see this working in the AppAuth Electron Sample. I used the following commands:

  • git clone
  • cd appauth-js-electron-sample
  • npm install
  • npm run compile
  • npm start

Note that on my Windows PC I had to edit the package.json start command as follows for this to work:

We then see a desktop app that looks like this, and we can click Sign In to invoke an OAuth login on the system browser:

When Sign In is clicked the System Browser is opened, in a new instance of the default browser (or a new tab if the browser is already running).

The user may already be logged in, perhaps for our SPA, but if not then the user will be prompted to login:

The Desktop App supplies a ‘loopback’ value as the OAuth Redirect URI, and the desktop app listens on this URL for a login result.

When the login completes, the browser returns to the below URL and the desktop app receives the response details. The desktop app then swaps the returned Authorization Code for tokens, after which it can call APIs.
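Behind the scenes the desktop app runs a temporary HTTP server on the loopback interface and reads the authorization code from the redirect query string; a minimal sketch, where the helper name is this sketch's own:

```typescript
import * as http from "node:http";

// Start a temporary server on the loopback interface, wait for the OAuth
// redirect, then return the authorization code from the query string
function waitForLoginResponse(port: number): Promise<string> {
  return new Promise((resolve, reject) => {
    const server = http.createServer((request, response) => {
      const url = new URL(request.url ?? "/", `http://127.0.0.1:${port}`);
      const code = url.searchParams.get("code");
      response.setHeader("Connection", "close");
      response.end("Login completed - please return to the desktop app");
      server.close();
      code ? resolve(code) : reject(new Error("no authorization code in login response"));
    });
    server.listen(port, "127.0.0.1");
  });
}
```

The server only needs to live for the duration of one login, and listening on 127.0.0.1 rather than all interfaces keeps the redirect local to the user's PC.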

This works OK from a technical viewpoint, but for most companies it may have a few problems, discussed next.

Potential Problems for Company Desktop Apps

Problem 1: If a busy user accidentally closes the Login Tab in the System Browser, we need to make sure the user can retry.

Problem 2: After login the display may not please Business + UX stakeholders, who are likely to want to show more polished output.

Problem 3: You need to ensure that listening on the loopback URL would not be blocked by Corporate IT Policies of your customers.

We will do some work to mitigate these risks, since for a Real Corporate App we would need to get past all blocking issues, satisfy QA and Business / UX stakeholders, and release to production.

Login UX Ideas from Third Party Apps

We will borrow some Login User Experience ideas from existing third party apps, which deal with some of the above problems.

The Instagram Desktop App appears to run in the browser but is a desktop app with additional access to the user’s file system:

During login the following screen is shown, so that the user can always retry if the login on the system browser is accidentally dismissed:

After login on the system browser, the user is left in the Instagram web app rather than on a blank browser page:

GitHub Desktop allows sign in via the System Browser:

It uses a Private URI Scheme so that the System Browser can invoke the Desktop App and send it the Authorization Code when a login completes:

The Private URI Scheme is registered by the GitHub install program, so that x-github-client maps to an executable path that the OS knows how to invoke.
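On Windows, a private URI scheme registration boils down to registry entries of the following shape; this fragment is illustrative and the install path is hypothetical, since GitHub's actual entries will differ:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\x-github-client]
@="URL:GitHub Client Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\x-github-client\shell\open\command]
@="\"C:\\Users\\<user>\\AppData\\Local\\GitHubDesktop\\GitHubDesktop.exe\" \"%1\""
```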

On the usability side of things, the Private URI Scheme looks a little more integrated and automatically brings the desktop app to the foreground, though the user is left with a blank page after login.

Where Are We?

On the security side we know that we will do logins on the System Browser, using AppAuth-JS libraries and the Authorization Code Flow (PKCE).

We still have some open questions, but we will start with a Code Sample that uses the Loopback Interface, and aim to deliver good Reliability and Usability.

Next Steps

Mobile App High Level Requirements


So far our Code Samples demonstrate a completed SPA and API architecture. Our last post discussed Coding Key Points for our .Net Core API.

We will now start our second theme, focusing on our main company requirements for Native Apps, which covers both of the following:

  • Mobile Apps which run on iOS and Android devices
  • Desktop Apps which are installed on end user PCs

Standard OAuth Flow for Native Apps

If we return to the diagram from the Auth0 web site we can see that we should be using the Authorization Code Flow including PKCE handling:

OAuth 2.0 and Open Id Connect messages are identical for Mobile Apps and Desktop Apps, but there are some other differences which we will cover.

Goal: Pass App Store Approval

Before you can release a new Mobile App to App Stores it will need to undergo a security review from Apple (iOS) and Google (Android). If your Mobile Login handling does not follow guidelines it may be rejected.

Login UX for Native Apps

The recommendation for Mobile Apps is to avoid logins on Web Views and to use newer In App Browsers instead, which overlay the Mobile App’s UI:

Similarly, Desktop Apps are meant to use the System Browser for User Logins, and getting the best UX / technical balance is tricky.

Goal: Best Usability and Password Management

Use of In App Browsers is good from a security viewpoint, and it also provides an integrated user experience, which companies will want for their end users.

Use of passwords on a mobile device is painful due to small keyboards, so we want to avoid asking the user to log in too often.

On mobile devices we should therefore ensure that browser features such as Remember Password work in the standard way.

Goal: Use Third Party Security Libraries

As for SPAs, we do not want to write any security code ourselves. Instead we will use the respected Google AppAuth libraries, which are recommended by both Apple and Google – and we hope these will help us pass App Approval.

Use of these libraries will also help ensure that we minimize security mistakes, and get us on a good path where we can take library updates for free when new security features are implemented.

UI and API Interaction

Our API Architecture is already complete and will not need to change when we implement mobile and desktop apps in addition to SPAs:

  • Each type of UI will provide access tokens when calling APIs
  • APIs will treat calls from each kind of UI in the same way

Once again we want to use Short Lived Access Tokens (good security), along with Longer Lasting User Sessions (good usability).

Online Examples?

One challenge when implementing OAuth for Native Apps is that currently there are very few third party apps that use the recommended standards:

  • Most desktop apps do not use the System Browser for logins
  • Most mobile apps do not use In App Browsers for logins

This can make it difficult to convince your stakeholders, who may have a Business or UX Focus, that your proposal for logins is right.

Our Mobile Requirements

Here is a summary of this blog’s requirements, which are similar to those for Single Page Applications:

  • Pass Apple and Google Approval so that we can release to App Stores
  • Best Login Usability for End Users
  • Short Lived Access Tokens with Silent Token Renewal
  • Our solution must Pass Security Reviews
  • We want to avoid writing any Complex Security Code

Where Are We?

We know what we want for Mobile Apps but we will demonstrate the OAuth technology for Cross Platform Desktop Apps first:

  • Partly because development and testing is easier on a full screen PC
  • Partly because OAuth for desktop apps has its own challenges

Next Steps

.Net Core API – Key Coding Points


Previously we provided an Overview of our .Net Core API to describe the setup and API behavior.

Next we will look at some key C# coding points, which enabled us to produce equivalent behavior to our earlier Node JS API Code.

Web API Code Overview

Our API code aims to demonstrate how to get the tricky OAuth plumbing out of the way so that you can focus on growing your company’s API logic.

Our API’s ‘business logic‘ is a trivial controller that returns hard coded data from JSON text files. We are treating this as ‘sensitive corporate data‘ that must be protected via OAuth access tokens.

Loading Custom Configuration

Our Web API uses custom JSON configuration, and this would be delivered by your Continuous Delivery process. If required you can combine standard Microsoft sections with your own custom ones.

Our sample provides a couple of extension methods for the IConfiguration interface, to load our custom data points into objects:

Creating the Web Host Listener

We create the web host to listen over SSL, based on the details in the configuration file.

If we ignore OAuth handling, the rest of our API startup is boilerplate ASP.Net Core configuration that you can read about in many online articles.

Note that our sample API also serves up our sample SPA’s static content, and of course a real API would not do this:

JWT Bearer API Authentication

Microsoft Authentication Middleware works by calling AddAuthentication and then selecting an Authentication Method to say how it will work.

Meanwhile an Authorization Filter indicates which controller operations need to be secured by the authentication method:

As can be seen the most commonly used out of the box option for OAuth Secured APIs is JWT Bearer In Memory Token Validation.

This may work fine for your company but this blog prefers an alternative solution due to the Design Aspects we covered earlier.

Configuring Identity Model Authentication

For our API we will therefore use an alternative authentication method of Identity Model Token Introspection. This method will also be commonly used by companies who use Reference Tokens rather than JWTs.

The full Identity Model configuration is shown below, and uses the OAuth settings from our custom configuration. We also indicate that we will cache introspection results until the token expiry time.

A few points of particular interest:

  • We use the Microsoft Memory cache for Claims Caching
  • To introspect JWTs we need to unset SkipTokensWithDots
  • We added a helper class to enable HTTPS Proxy Debugging

Configuring Logging

Microsoft logging works in terms of a ‘logger per class’, and as an example the Identity Model introspection handler creates its logger as follows:

Our Sample API creates its own loggers and also applies a Logging Filter to limit output to just the areas we are interested in:

API Controllers and Authorization

Once Identity Model processing has completed, validated requests are assigned a Claims Principal for use within your API controllers:

Your business logic can then easily access claims in its API logic, to authorize requests for Corporate Assets.

It Just Works …

And that is pretty much all we need to do to implement our Preferred API Architecture. The running API with logging is shown below:

I like the design of the new Microsoft Web API stack, and the Identity Model library integrates with it very easily.

API Unhandled Exceptions

To finish up we will ensure that we are handling errors in a solid manner. First we added a simple Unhandled Exception Handler to catch any errors in our business logic:

We can simulate an error by changing our repository class to use an invalid JSON file location. First we see that the exception is caught and logged:

Next we see that a controlled error is returned to the UI for display:

API Authentication Errors – Default Behavior

One thing I did not like about the Microsoft stack is that any error during authentication processing is returned as an empty 401 response, and there is no opportunity to log the error or customize the response.

The empty 401 is not too bad if the reason for the failure is a missing or invalid access token, but the error could be a permanent failure:

  • API is misconfigured and Okta introspection always fails
  • API is unable to connect to Okta and authentication always fails

In these cases, the response we are returning to our caller is not consumer friendly and may lead to problems such as redirect loops.

API Authentication Errors – Custom Behavior

To implement my preferred behavior I had to resort to a minor code hack, since it seems there is currently no easy way to extend Microsoft’s authentication error handling.

In the above statement I replaced the Microsoft AuthenticationMiddleware class with my own CustomAuthenticationMiddleware class. The custom implementation just calls registered handlers then handles errors.

I can now return more consumer friendly errors, which I would expect to help my company resolve any production issues more quickly.

Of course, the above code ensures that for any type of authentication error we avoid moving onto subsequent middleware, deny access and avoid returning any sensitive information to the caller.

To see the new behavior, we can configure the API to use a proxy when calling Okta, but also ensure that Fiddler or Charles are NOT running:

Our API will now experience a connection error that is logged, which we hope will help us to understand incidents and quickly resolve them:

Our API consumers now get responses they can work with, as for our SPA, whose display references the details we logged server side:

Where Are We?

The .Net Core Web API stack is easy to use and we made a good library choice in Identity Model, so it has been easy to deliver our API Architecture.

We had to spend a little time to get our technical foundations and reliability in good shape. We can now focus on growing our API’s business logic.

Next Steps

.Net Core API – Overview


So far our code has been entirely Javascript based. Our last post discussed In Memory Access Token Validation for APIs when using Azure Access Tokens.

Next we will port our Node JS API to .Net Core and C#, since many companies use non Javascript technologies for their APIs.

This Blog’s API Requirements

Although the technology has changed, our API Goals have not. Previously we carefully discussed API Design Aspects and met these requirements:

  • Externalize Token Validation from our API code
  • Cope automatically with Token Signing Key renewal
  • Collect User Data Points from the Token and Central + Product Data
  • Use Claims Caching of the above data points to make our API efficient
  • Solid Error Handling for our OAuth 2.0 Secured API
  • A Productive Developer Setup

Obviously your company may prefer a different solution, but I hope the points I’ve raised make you aware of some important considerations.

Solution Overview

Recall that the first time an access token is received by our API it will be processed according to the following pattern to collect claims:

Claims will then be cached until the token’s expiry time so that the claims handling for all subsequent API calls is a fast cache lookup:
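The cache behaviour can be sketched as a small expiry-aware map; this mirrors what the earlier Node JS API did, and the type and method names are illustrative:

```typescript
interface ApiClaims {
  subject: string;
  scopes: string[];
}

// Cache claims until the token's expiry time, keyed by the access token
class ClaimsCache {
  private readonly entries = new Map<string, { claims: ApiClaims; expiry: number }>();

  // expiryEpochSeconds is the token's exp claim, in seconds since 1970
  add(accessToken: string, claims: ApiClaims, expiryEpochSeconds: number): void {
    this.entries.set(accessToken, { claims, expiry: expiryEpochSeconds * 1000 });
  }

  // Return cached claims, or null when absent or expired, forcing re-introspection
  get(accessToken: string): ApiClaims | null {
    const entry = this.entries.get(accessToken);
    if (!entry || entry.expiry < Date.now()) {
      this.entries.delete(accessToken);
      return null;
    }
    return entry.claims;
  }
}
```

In the C# version the same idea is delegated to the thread safe Microsoft memory cache rather than hand rolled.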

Third Party Security Library

We will use the Identity Model OAuth 2.0 Introspection Library, which will perform the validation and claims management for us.

Note that when using C# we need to ensure that the caching is thread safe, so we are using a tested library rather than writing this code ourselves.

Downloading the Code Sample

Download code from GitHub via this command. The sample uses SSL so that we can also cover any .Net Core specific SSL issues:

  • git clone

Build Prerequisites

First you need to download and install the .Net Core SDK which can be installed on Windows, Mac OS or Linux:

Developer Setup

To get fully set up I would recommend first following the Initial SPA Code Sample Setup:

  • Ignore step 9 since our API no longer uses Node JS
  • Register https:// URLs for our SPA and API rather than http:// URLs

Next you will need to configure SSL Browser Trust for our Company Root SSL Certificate.

View Code in an IDE

When you open a C# file in Visual Studio Code it will prompt you to install the C# extension, which provides editor features such as Intellisense:

Update API Configuration

Our API configuration now looks like this, and you will need to update the 3 OAuth fields to match your Okta setup.

Build the API

As for our earlier samples, we will run the API from the command line. Instead of npm commands we will now use dotnet commands:

  • The API was created with ‘dotnet new webapi’
  • We use ‘dotnet build’ to get dependencies and build our code

I’m impressed with the leanness of the new Microsoft tooling. In particular the default project templates and CSPROJ files reference only a handful of dependencies:

Run the API

We can then run the API as an administrator user via one of these commands, and it will listen on the standard firewall friendly SSL port:

  • On Windows use dotnet run as an administrative user
  • On Mac OS or Linux use sudo dotnet run

In production you would run your API as a low privilege user, and there are various ways to enable this. Our objective though is just to enable real world HTTPS traffic on a developer PC.

Testing our API

We can test our API in isolation by typing a data URL into the browser without a token, after which we get the expected 401 response.

A better method of course is to run our SPA, which behaves identically to previous samples and calls the API after login:

Our .Net Core API implements equivalent logging to our earlier Node JS API. Introspection and data lookup only occurs when a token is first received. On subsequent API calls our API claims are retrieved from a memory cache:

Viewing API Requests to Okta

To view SSL requests from our API to Okta we now need to update our API configuration to indicate useProxy = true:

I can now run a tool such as Charles to view the introspection requests during development:

If you run into problems where the proxy is not capturing requests, see our earlier Proxy Configuration section.

Handling Introspection Errors

Finally, let’s intentionally cause an API error, as for our Node JS application, by setting invalid Introspection Client Details in our API configuration:

This results in an error when the Identity Model Library calls Okta:

Our API logs full error details, and creates an Error ID of 38595. In a real API this information would be persisted to a database or log files.

Our API returns a 500 response to the UI and a JSON error object so that the UI can handle the error in a controlled manner.

The UI then renders the error object, which enables Technical Support Staff to quickly look up details for ID=38595 from the API’s logs:

Where Are We?

We have a working code sample that demonstrates our Preferred API Architecture and it works equivalently to our Node JS version.

We are using a good security library that does the heavy lifting for us, and we’ve been able to easily implement debugging, error handling and logging.

Next Steps

In Memory Token Validation


Previously we covered migrating our SPA Code Sample to Azure AD. This article drills into Open ID Connect features that can be used when validating access tokens in memory.

First we will do some Manual Access Token Signature Validation using online tools and then we will look at the equivalent code.

View the Access Token’s Key Identifier

Run this blog’s Azure Code Sample or your own application and use an HTTP debugger to get an Access Token. Then use the JWT.IO Page, copy in the token and get the value of the ‘kid‘ parameter in the JWT header:

Initially the token is seen to have an Invalid Signature. This is because we need to provide the Token Signing Public Key in the below text box:

If there is a nonce field in the JWT’s header then it is intended only for Microsoft developed Azure APIs. These tokens require special handling and will always fail standards based validation.

Run a Metadata Lookup and get the JWKS Endpoint

For my Azure AD Tenant, the Open Id Connect metadata URL is as follows:

We can see that the endpoint for downloading token signing (JWKS) keys is at the following location:

Download Token Signing Keys

Next get JWKS keys from the endpoint we located, and find the x5c value that matches the access tokens’ kid value:

For Azure AD the x5c value is the public key of the Microsoft Token Signing Certificate.

Save the Key in Certificate (PEM) Format

This involves simply surrounding the key with these well known lines:
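A small helper shows the idea: the raw base64 x5c value is split into 64-character lines between the standard BEGIN/END markers. The function name is this sketch's own:

```typescript
// Wrap a raw base64 x5c value in PEM certificate markers, 64 characters per line
function x5cToPem(x5c: string): string {
  const lines = x5c.match(/.{1,64}/g) ?? [];
  return ["-----BEGIN CERTIFICATE-----", ...lines, "-----END CERTIFICATE-----"].join("\n");
}
```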

If we save the text to file we can view it using the operating system viewer:

Paste the Key into JWT.IO

On the website, paste the above long key into the Verify Signature text box and we should now see that the Token’s Signature is Verified.

In my usage the Microsoft certificate was not from a Trusted Issuer, and this could potentially cause signature verification issues, depending on which Third Party Security Library you use.

Full Access Token Validation

As we covered in Step 12 of our Original Code Sample Workflow, the overall technical validation of received access tokens requires these checks:

Property Expectation
Issuer The token is from Azure AD and from our tenant
Audience The token was issued for our API
Digital Signature The token has not been tampered with since issued
Active The token has not expired

Once complete, your API may also want to apply extra security checks, such as checking for a recognized calling application. Your code would then move on to applying business level authorization rules.

API Token Validation Code

In my API’s Authenticator Class I wrote the validation code as follows, which follows the same steps as the above manual validation.

At runtime the code first decodes the JWT and reads its Key Identifier, and we then need to get the corresponding Token Signing Public Key.

I used a third party security library called jwks-rsa to do this, since getting the key requires some low level security knowledge. Most of us would make mistakes if we tried to write this code ourselves:

I used another library called jsonwebtoken to do the actual validation, and called it as follows, supplying the expected issuer and audience. Note that our API gets the Issuer Id from downloaded metadata:

We can easily test the error cases of supplying invalid values for issuer and audience. We can also supply an expired access token, in which case the library throws an exception and our API logs the returned error:

Since the above operation is expensive I continued to use the Claims Caching technique presented earlier, so that it only occurs when a server receives a new token for the first time, rather than on every single API call.

Other Technology Stacks

My sample API is in Node JS, but once you understand the concepts you should be able to code it in any technology. Note that the JWT.IO web site points to recommended libraries for multiple technology stacks:

Here is some online code that may be useful to some of you:

Where Are We?

We have covered a variation where our API used In Memory Access Token Validation, though I would have preferred to externalize this code.

Next we will cover a different variation, where the API is implemented in a non Javascript server side language.

Next Steps

Azure AD SPA Code Sample


Previously we configured an Azure Active Directory Setup for our API and SPA. Next we will get our SPA Code Sample updated to work with Azure AD.

Developer Setup

Running the sample on a Developer PC requires the Original Setup Steps, though of course we do not need to configure Okta since we are using Azure AD instead.

Download code from GitHub via this command. The sample uses SSL URLs for the SPA and API, as required by Azure AD.

  • git clone

Make sure you also import the below Root Certificate into your Certificate Store, as covered in Developer SSL Setup, to prevent browser warnings:

API and SPA Configuration Changes

We have updated our API to point to the Azure AD Authorization Server. Also, because we are dealing with in memory validation, we must configure an expected audience for access tokens that matches our API’s ‘App ID URI‘.

Our SPA also points to Azure AD, and the other change is a new ‘resource‘ parameter that Azure AD requires us to supply during login redirects:

OIDC Client Id Token Validation

After login, OIDC Client needs to validate the id token issued by Azure AD, and by default OIDC Client will attempt to download token signing keys.

However, this breaks due to Azure AD not allowing the CORS request – and there is a similar problem if the UI calls the User Info endpoint:

This is annoying, but we will work around it by downloading token signing keys using a double hop, via our API:

It turns out also that User Info does not need to be downloaded, since Azure AD adds user name and email fields to the Id Token and OIDC Client can read them from there.

SPA Code Changes

The changes to our Authenticator class are shown below:

  • We first make an Ajax call to the API to get Token Signing Keys
  • We pass the result into our Authenticator class
  • User Info Download is disabled
  • An extra ‘resource’ query parameter is passed in from the SPA config
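Put together, the oidc-client configuration might look like this sketch. Every URL and id below is a placeholder, and tokenSigningKeys stands in for the result of the Ajax call to our API:

```javascript
// Hypothetical oidc-client configuration for Azure AD; all values are placeholders
const tokenSigningKeys = [];  // would hold the JWKS data fetched via the API double hop

const settings = {
  authority: 'https://login.microsoftonline.com/mytenantid',
  client_id: 'my-spa-application-id',
  redirect_uri: 'https://web.mycompany.com/spa/',
  response_type: 'token id_token',
  scope: 'openid profile email',
  loadUserInfo: false,                 // User Info download is disabled
  signingKeys: tokenSigningKeys,       // supplied keys avoid the CORS problem
  extraQueryParams: {
    resource: 'https://api.mycompany.com/api',  // Azure AD resource parameter
  },
};

// The settings would then be supplied to oidc-client:
// const userManager = new Oidc.UserManager(settings);
```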

API Code Changes

The main change to our API is that it no longer uses Introspection to validate access tokens and instead uses In Memory Token Validation.

This topic is quite interesting in terms of developer understanding, so I have covered it in a separate post.

SPA Execution

If we now run our API with ‘npm run httpDebug‘ and our SPA with ‘npm start‘, we can view key changes to HTTPS messages in an HTTP debugger.

When our SPA loads, and before the login redirect, there is a double hop to download token signing keys. This API request does not need to be secured.

The login redirect now has an extra, vendor specific, query parameter, to identify the API it wants an access token for:

The login uses the Microsoft Login Page and you will log in with your Azure AD Credentials:

On return from login, the Implicit Flow token validation in OIDC Client completes successfully, using the token signing keys we provided.

The SPA then securely calls the API with the Azure AD token. The API strictly validates the access token, then returns data:

Our SPA uses OIDC Client to read Profile and Email information from the Id Token, then displays the User Name of the logged in user:

The features for Token Renewal and Basic Logout also work OK. Therefore we have met all of our usability requirements.

Azure AD Id Token

First let’s look at the Id Token returned to the UI in a JWT Viewer. This token is proof of the authentication event and has the SPA as its audience.

Note that in Azure AD, the token always includes the user’s Azure profile and email information, even if I only specify scope=openid.

Azure Access Token

The Access Token is intended for calling our API, so it has a different audience value. It also contains User Profile and Email information.

User Ids in Tokens

As covered in previous posts, a key capability for an API is often to extract the User Id and use it to apply business logic.

My Azure AD User Id can be viewed against my User in the Portal, under Azure Active Directory / All Users and is this value:

Azure AD tokens seem to include the Azure User Id in the oid claim as opposed to the sub claim. So APIs will need to use the oid claim to apply personalized authorization and data access.
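As a small sketch, and assuming the API has already validated the JWT, reading the user id could look like this. Real code would take the claims from the JWT library’s verification result rather than decoding manually:

```javascript
// Sketch: read Azure AD's durable user id from the oid claim of a validated token
function getUserId(accessToken) {
  const payloadPart = accessToken.split('.')[1];
  const claims = JSON.parse(Buffer.from(payloadPart, 'base64url').toString());
  return claims.oid;  // use oid rather than sub for Azure AD tokens
}
```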

Token and User Session Lifetimes

My Azure account is a trial one so I am using default times for Access Tokens and User Sessions, which are not too far out of line with our requirements:

  • Access Tokens last for 60 minutes
  • User Sessions last for 24 hours

Real world companies will use Azure AD B2C and be able to configure Custom Token and Session Lifetimes via the Azure Portal.

Where Are We?

We have got past all blocking issues for our SPA and API, though it was a little more difficult than we’d like.

Next Steps

Azure Active Directory Setup


In our previous post we covered a Developer Local SSL Setup. Next we will look at some variations, starting with Windows Azure.

About Azure Active Directory

There are 2 main reasons why a company will use Azure Active Directory and this post is focused on only the first of these:

  • Use Azure AD as a Standards Based Authorization Server
  • Use Microsoft Vendor Extensions (such as Office 365 Azure APIs)

Features we Require

We have implemented our core features already, in a Standards Based manner, so we hope they will work in Azure AD without many changes:

  • Users must be able to login to the SPA using the Implicit Flow
  • Azure AD must issue access tokens and id tokens to our SPA
  • Our SPA must be able to do Implicit Flow token validation
  • Our API must be able to Validate Access Tokens
  • Our SPA must be able to use Short Lived Access Tokens
  • Our SPA must be able to do Silent Token Renewal
  • Our UI must be able to get User Info
  • Users must be able to do a Basic Logout

Azure Sign Up

I used the Azure Free Trial offer and signed up for a developer account. You may have a more complete Company Setup, including Azure B2C features.

Portal for Azure AD Applications

The main portal is at and this is where we will register applications, since these OAuth endpoints have the best standards support.

Azure AD Authorization Server

In the main Azure Portal, identify your Tenant Id, which is a value such as the following:

The Authorization Server Base URL will then be of the following form, and we can view metadata by browsing to /.well-known/openid-configuration:


Azure AD Standards Limitations

We will run into the following Azure AD standards limitations, and we will work around them:

  • There is no API token introspection endpoint for validating tokens
  • JWKS and UserInfo endpoints are not callable from a browser

Configure a User Profile

Under Azure Active Directory / Users and Groups, select your user and then select the below Profile option:

Next fill in the First Name and Last Name fields, and also scroll down and fill in the Email Address:

Registering our API in Azure AD

Under Azure Active Directory / App Registrations, select New Application Registration and add a BasicAPI entry:

Make a note of the following App ID URI value, which will be used as the Audience for Access Tokens later.

Registering our SPA in Azure AD

Next register a Basic SPA, and enter its URL, which will be our SPA’s OAuth Redirect URI. Note also that we will use the Application Id as our OAuth Client Id:

Next select the Manifest option and enable the Implicit Flow, by changing the default false value to true:

Finally, under All Settings / Required Permissions, select Add / Select an API, then search for our API and add delegated permissions to it:

Azure AD 2.0 Endpoints

Note that Azure AD also has some newer endpoints at the below base URL. You can use them in a similar manner, starting with the metadata URL:


If you want to try to use the newer endpoints you must register the API and SPA on a different portal at

I started with the above endpoints but was unable to get my standards based requirements working, especially in the area of OAuth scopes:

  • I wanted to use standard Open Id Connect scopes ‘openid profile email‘ so that OIDC Client functionality worked in my Javascript UI
  • I wanted the SPA to be able to send the Access Token to my (custom) API, and to implement API token validation
  • There is no User Info endpoint so I wanted to call the Microsoft Graph API to get the User Profile, then display the User Name in the UI

To achieve the above I wanted the Access Token to have a custom scope access_as_user and also a Microsoft Graph scope User.Read.

However, I did not find usage intuitive, ran into many cryptic errors and came to the conclusion that what I wanted was not supported. The older endpoints seem to have better Standards Based options for the time being.

Where Are We?

We have an Azure setup and next we will update our Code Sample. It will work a little differently due to Azure limitations and vendor specific functionality, but we hope to meet the same Company Requirements.

Next Steps

Developer SSL Setup


We have Reviewed our SPA Solution against requirements. Before getting into Mobile Tech we will switch to an SSL / TLS local developer setup.

As part of this I also want to introduce some useful cross platform tools that can improve our troubleshooting capabilities.

X509 Certificate Pipeline

It is quite common for companies to have a Continuous Delivery pipeline something like the following, and to issue different types of SSL Certificate:

Environment Type of X509 Certificate
Development None / Self Signed
QA Company Issued
Staging Third Party Issued
Production Third Party Issued

Production Certificates are issued by a Root Authority such as Verisign, and cost money, so cheaper certificates may be used for Development and Test Environments.

The Self Signed option is usually the most practical option for developers, since it does not require money or a dependency on IT departments.

Why SSL for Developers?

Some Authorization Servers, such as Azure Active Directory, will require us to use HTTPS URLs for our SPA, so when working with OAuth technologies, SSL is part of the fabric.

More importantly though, we want to see real world traffic from browser and mobile clients to our API, so that we can identify any SSL issues early in the pipeline, in order to save costs.

SPA Code Sample with SSL

A 4th web sample that runs under SSL is available here, and can be cloned in a similar manner to previous samples:

  • git clone

Code Sample SSL Settings

The updated code sample now runs over an SSL port and defaults to 443, since my preference is to use the standard ‘firewall friendly’ port.

If required, change the configuration by just editing the config files. You can use a custom port such as if you prefer:

You will also need to change Okta URLs under Applications / Basic SPA and API / Trusted Origins to use an https value for the web domain:

Generated Certificate Files

The sample includes some Certificate Files that can be used directly, and also some scripts for regenerating them with different parameters if required:

We will use the below certificates on a Developer PC so that the setup is closer to a production setup (though obviously it is not secure to use self signed certificates like this for production servers).

Certificate File Represents
Root Certificate Public Key
mycompany.ssl.pem SSL Certificate Public Key
mycompany.ssl.key SSL Certificate Private Key

We have created our own Root Certification Authority, which is just used to issue our SSL certificate:

The SSL Certificate we have created is a ‘Wildcard Certificate‘ which will be used by our Node Server to handle both Web and API requests, so it is valid for both domains:

API Usage of our SSL Certificate

When the sample runs, the API reads the Certificate from disk and uses it to listen on an SSL connection. Reading files from disk seems to be standard NodeJS practice, since it does not use certificate stores:
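The pattern amounts to this configuration sketch, using the sample’s certificate file names (paths may differ in your own setup):

```javascript
// Configuration sketch: load the SSL certificate files from disk
// and start an HTTPS listener on the standard SSL port
const fs = require('fs');
const https = require('https');

const sslOptions = {
  cert: fs.readFileSync('mycompany.ssl.pem'),  // SSL certificate public key
  key: fs.readFileSync('mycompany.ssl.key'),   // SSL certificate private key
};

https.createServer(sslOptions, (request, response) => {
  response.end('API response over SSL');
}).listen(443);
```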

Browser Trust of our SSL Root Certificate

In order to prevent browser warnings, our Root Certificate public key must be trusted. On Windows we do this by importing its PEM file to the Trusted Root Certification Authorities for the Local Computer store:

On Mac OS we need to import the PEM file into the Keychain Access application under System / Certificates, then set the certificate to the Always Trust level:

On my Fedora Linux system I use the Firefox browser and added the Root CA to the Firefox store, under Preferences/ Advanced / Certificates / View Certificates / Authorities / Import:

Running the Sample over SSL

We can now run our sample over SSL and verify that there are no browser errors or warnings with our SSL setup:

The Google Chrome browser has some useful SSL tools under Main Menu / More Tools / Developer Tools / Security, which allow us to view SSL strength details:

Note that Chrome requires the browser’s exact domain name to be present in one of the SSL Certificate’s Subject Alternative Names.

SSL Debugging – Charles Settings

On Windows it is common to use the Fiddler tool to look at HTTPS Traffic. However, the Charles HTTP Debugger is Cross Platform and we will use it later for Mobile Development, so let’s take a closer look.

After installing Charles, export the Charles Root Certificate and then follow the above Browser Trust process again, so that you do not experience any browser warnings when debugging.

Next we need to tell Charles which SSL domains it should decrypt requests for, under the Proxy / SSL Proxying Settings menu item:

Operating System Proxy Settings

On Mac OS we need to edit the network connection in order to proxy requests via Charles, and we will use this for mobile HTTPS debugging later:

On my Fedora Linux system I needed to set Firefox Proxy details under Preferences / Network Proxy in order to capture browser requests:

For other types of request, such as those from APIs or desktop apps, I needed to use Fedora’s (Gnome) Settings tool, then navigate to Network / Network Proxy:

Capturing our SSL Requests

Now that the Operating System setup is done, we can view OAuth messages used for our SPA’s Implicit Flow:

We can also capture requests from our API to Okta, if we start our API via npm run httpDebug. This sets an HTTPS_PROXY environment variable to, which is the address on which Charles listens.

However, we will find that we now run into SSL trust issues, since NodeJS will not use the Browser Trust we configured.

Disabling Root Certificate Checks

A quick and dirty hack to get API SSL debugging working is to completely disable SSL root checks by setting this environment variable:


Although this will get SSL debugging working for API requests to Okta, it is clearly not a good solution.

Certificate Root Replacement

Tools such as Charles and Fiddler work by replacing the root of SSL addresses you browse to:

Some corporate networks will use a similar technique for Outbound SSL Firewall Filters:

Tools for Testing SSL Connections

I use the following two tools to troubleshoot SSL connections. Both of these are preinstalled on Mac OS and Linux, whereas for Windows you can download EXEs from the below links:

  • curl allows us to easily send a standalone HTTPS request
  • openssl allows us to easily troubleshoot SSL connections

The below commands are useful for troubleshooting SSL server trust issues from a client viewpoint:

  • curl
  • openssl s_client -connect

By default, when debugging API introspection requests you will get SSL trust warnings as below, due to self signed certificates in the trust chain:

White Listing Trusted Root CAs

On my work PC I created a Certificate Bundle File containing the following root certificate public keys:

  • Our Company Root
  • The Charles Debugger’s Root
  • The Fiddler Debugger’s Root
  • The Corporate Firewall Filter Root

I then saved these to a single bundle file, which needs to be in the following PEM format:

If you use Windows tools to export certificates, they may be in DER format, in which case you will need to first translate to PEM format as follows:

  • openssl x509 -inform der -in -out

Re-Testing our SSL Connections

We can now re-run our troubleshooting commands, specifying an additional Root CA file to trust, to fix the Certificate Trust problem:

  • curl –cacert
  • openssl s_client -CAfile -connect

In addition, we can capture requests in Charles / Fiddler by setting an HTTPS_PROXY environment variable if required:

Making the Whitelist Permanent

One option is to specify the path to trusted roots in the package.json file, using the NODE_EXTRA_CA_CERTS environment variable:

In this case however, a permanent Environment Variable for my development user worked better, so that it applies to all projects:

Note that on Mac OS and Linux, we set the environment variable in our .bash_profile file. When we run our API we then use the sudo -E option to preserve the environment:

On Fedora Linux I configured root trust by copying our file to /etc/pki/ca-trust/source/anchors, then running sudo update-ca-trust.

Final SSL Debugging

We now have a fully working SSL developer setup where we can capture both SPA and API requests. Our setup is production like and does not involve any code hacks:

Where Are We?

SSL issues can be a little painful but, if we take control of technical tools, we empower ourselves, resolve more infrastructure issues early and are less likely to run into unexpected production issues later.

Next Steps

Goal 1 – SPA Review


In the Final SPA Coding Key Points we completed our SPA and API Code Sample, so let’s have a review of the technology and our goals.

OAuth Standards

In our SPA Requirements, we had a big Implicit Flow concern around Token Renewal, since we wanted to use Short Lived Access Tokens.

However, OIDC Client and Okta provide us with a good solution, so we have no Blocking Issues in this area.

SPA Security Library

The OIDC Client library works very nicely and has been a great help, by doing the heavy lifting for us.

It has some known issues, but also an active community, so we are on a good path where we will get future improvements for free:

API Security Library

In our NodeJS Server, the OpenId-Client has also been useful, and reduces the number of lines of code we need to write.

Note that an alternative SPA security option is to have a small server side and to implement the Authorization Code Flow for Server Side Apps. If using this approach with NodeJS then the above library is a great choice.

Okta Authorization Server

Okta is a Standards Based Authorization Server, and plays very nicely with the SPA and API security libraries we have chosen. We have not found any blocking issues yet with Okta.

I have also tested the code samples on Ping Federate and found no blocking issues, so our API and SPA code seems portable.

Other Authorization Servers?

We will shortly also run our sample against Azure Active Directory, and later against Google’s Authorization Server, to see how well it works with large vendor options.

Code Simplicity

I am pleased with the way our code turned out. I was able to meet my Coding Goals of being Modular, Technically Simple, Business Focused and Reliable, and the Javascript technology is a lot better than I expected:

We needed to write some simple plumbing in order to implement Reliable API Calls and to provide Supportable Errors. In a real application, we would now forget about OAuth and focus on growing the business logic.

Web Performance

We can easily move our SPA hosting to a Content Delivery Network, which was one of our big goals.

Our bundle sizes are bigger than we’d like, even after producing minified versions via ‘npm run build’. This is not ideal, but we can live with it:

I would expect this to be reduced over time though, perhaps partly as a result of crypto support being added to modern browsers:

Security Reviews and PEN Tests

We are using the Recommended OAuth Flow for an SPA, it uses Short Lived Access Tokens and is not subject to Cookie Vulnerabilities. We therefore expect our SPA to perform well in this area.

Least Privilege Issue

One Implicit Flow security issue I can think of is that we do not have the ability to use different access tokens with different privileges.

In the Authorization Code Flow, the user can log in and get a Refresh Token containing all privileges needed for the User Session:

The UI can then get Access Tokens with different privileges, and could use a Low Privilege for most operations:

This is an awkward coding model, and it is probably true that most Web UIs which use Access Tokens today are not coded like this. Instead it is most common for a UI to use the same token for all CRUD operations.

For now I do not consider this a blocking issue or one that makes a major difference to security, but it is worth being aware of it as a limitation.

Authentication Strategies

If in future we need to make code changes in order to support a different vendor, we can do so by changing just our Authenticator classes. No changes are needed to other areas, such as the below HttpClient class.

Where Are We?

Our SPA and API solution has all essential End User features and it has not been too difficult from a coding viewpoint, since we made good technical choices. 

Next Steps

Final SPA – Coding Key Points


In our last post we provided a Final SPA Overview. We will look at the key technical points below.

TypeScript Update

For the Final SPA Code Sample I updated from ES6 to TypeScript, which provides improved language features, tooling and compiler checks. Our SPA still gets transpiled and continues to work fine in Internet Explorer:

Another key difference is that I am using async / await features for callback handling, since our code then reads better than with ES6 promises. In particular our HttpClient class is now easier to read.

Open Id Connect Flow

The main change is to use response_type = ‘token id_token’, which is the recommended flow, and to update how the UserManager class is used:

The token renewal iframe is spun up by the OIDC Client library, and needs a page to execute on. Since we are a Single Page App we use the main page.

Reading Okta User Info

When tokens are received on our callback URL, they are processed for us by the OIDC Client. The information returned from the User Info Endpoint, such as User Name and Email fields, can then be read via the profile object:

Token Renewal Code

We need to change our Login Response handling to handle requests on the renewal iframe as well as on the main window:
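A sketch of the routing logic, using oidc-client’s signinRedirectCallback and signinSilentCallback methods, with the window check factored out for clarity. Names other than the two oidc-client methods are my own:

```javascript
// Sketch: send the login response to the right oidc-client callback, depending
// on whether we are executing on the hidden renewal iframe or the main window
function isRenewalIframe(win) {
  return win.top !== win.self;
}

async function handleLoginResponse(userManager, win) {
  if (isRenewalIframe(win)) {
    // Complete silent renewal, then short circuit normal page execution
    await userManager.signinSilentCallback();
    return 'silentRenewal';
  }

  // Main window: process a normal login response
  await userManager.signinRedirectCallback();
  return 'login';
}
```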

When handling Token Renewal errors, login_required means the Okta session cookie has expired. We ignore this response and our SPA’s 401 handling ensures that the user is redirected to login after the next API call.

When running the iframe on the main page of your SPA, you will need to short circuit the SPA Page Workflow when Silent Renewal completes.


Logout code is a one liner to trigger the Open Id Connect logout message on the main window:

I also implemented a primitive ‘router’ class. One of its responsibilities is to show a Logged Out view upon return to our SPA:

TypeScript SPA Build Changes

In our SPA the webpack configuration has changed, since we are now loading and building TypeScript files.

We are using core-js for polyfilling, which is a simpler TypeScript based approach, but the concepts are the same as previously:

TypeScript API Build Changes

In our API we are using ts-node to launch our server.ts file, whereas previously we were using node to launch our server.js file. Nodemon now monitors files with a ts extension and restarts the API when they change.

TypeScript Code Quality Checks

Both the SPA and API have a tsconfig.json file, which is used when webpack and ts-node invoke the TypeScript compiler. We have enabled strict mode so that we receive code quality warnings from the compiler and IDE:

One consequence of this is that we must add Type Definitions for Third Party Javascript Code we use, via commands such as this:

  • npm install @types/jquery --save-dev

The npm packages have been simplified since we no longer require all of the Babel dependencies:

Where Are We?

We have completed essential features and in particular we have achieved the following essential behaviour for our Corporate SPA:

  • Our SPA uses Short Lived (30 minute) Access Tokens
  • End Users only get redirected to Okta once every 12 hours

Next Steps

Final SPA – Overview


Previously we decided to update to the correct Open Id Connect flow for an SPA and use Id Tokens. We will now complete the Usability features, which are primarily related to Session Management.

Getting the Latest Code

The project is available here and can be downloaded / cloned to your local PC with this command:

  • git clone

How to Run the Final Sample

The only difference to the Setup Instructions for the previous code sample, is to run ‘npm install’ and ‘npm start’ from new folder locations in steps 8-10:

SPA Logging Option

It is useful to have a hidden option to look at OIDC log details for released products, in case we need to troubleshoot.

In our sample we will do this via an optional log=info query parameter, or we can specify a different level if needed: none / error / warn / debug.
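The parsing itself is simple. A sketch using the built in URLSearchParams class, with an illustrative function name:

```javascript
// Sketch: read an optional log level from the query string, defaulting to none
function getLogLevel(queryString) {
  const allowedLevels = ['none', 'error', 'warn', 'info', 'debug'];
  const requested = new URLSearchParams(queryString).get('log');
  return allowedLevels.includes(requested) ? requested : 'none';
}
```

The returned value would then be mapped to the OIDC Client library’s logging configuration.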

User Loaded‘ is output whenever we call ‘Refresh Data‘, and indicates that the OIDC Client token details are being loaded from session storage.

Open Id Connect Logins

The redirect message now includes a nonce parameter, which can potentially mitigate replay attacks of previous valid login responses:

The response includes an id token, and we know that tokens we have received were issued to our SPA from our Authorization Server:

As discussed in the previous post, once this message is processed we can say the user is authenticated to our SPA.

Token Validation

Upon return from login there is now considerable extra Javascript processing to validate the Id Token (JWT) and the hash of the Access Token, which we can view if we use log=debug:

First the token signing keys are downloaded from an Okta URL retrieved from metadata:

For the final sample we have changed the UI to get User Info directly via OIDC Client features, which is the only User Info we need in our Basic SPA:

Token Renewal

Token renewal occurs silently behind the scenes when the Access Token is less than a minute from expiring.
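The trigger condition amounts to this small sketch, where expiresAt is the token’s Unix expiry time in seconds:

```javascript
// Sketch of the silent renewal trigger: renew once less than 60 seconds remain
function shouldRenewToken(expiresAt, nowInSeconds = Date.now() / 1000) {
  return (expiresAt - nowInSeconds) < 60;
}
```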

I’ve added a UI Test Option to ‘expire’ a token by updating the HTML 5 storage details, so that we can invoke silent renewal on demand.

If we click it, then within a few seconds, the UserManager class will spin up an iframe and silently renew our token:

The HTTP redirect uses prompt=none, as discussed previously:

Most requests come back with a new access token, because the below Okta specific session cookie is sent on renewal requests:

We can then click Refresh Data and the main window will call the API with the new access token. That is, the End User’s session has been seamlessly extended by 30 minutes, without impacting usability.

Eventually the session cookie will expire and a login_required response will be returned during silent renewal:

If there is an error during silent iframe token renewal, our UI handles it and displays the error response on the main window:

To drill deeper into OIDC we can use log=debug and see that our access token starts with 30 minutes (1800 seconds) until expiry and ‘expiring’ + ‘expired’ timers fire every 5 seconds:

When the ‘expiring’ time counts down to zero, the token is 60 seconds from expiring, and a silent renewal is triggered:

OIDC token renewal occurs in the background rather than during End User time, which is good because we want to avoid Poor Usability if it takes 2 seconds due to a slow network.

I’m impressed with the above solution, since iframe token renewal is not easy to code reliably. We have made a good library choice in OIDC Client, and the result is that we get a mature tested solution for free.

Basic Logout

We can click the Logout button to generate an Okta logout request of this form:

When processed, an HTTP debugger such as Fiddler will show that the session cookie is removed. The OIDC Client also removes tokens from storage:

Our SPA returns to a Logged Out location within our SPA. We can then click ‘Home’ to load the List View again and trigger a new API request + login.

If we didn’t implement a Logged Out view then the user would be returned to the Okta login page. However, it can be useful to ensure that our SPA flow copes with unsecured views as well as those that call APIs.

Where Are We?

All essential End User features for our SPA and API have now been coded, and the OAuth code should not need to change much in future.

The Login User Experience is limited to logins with Okta passwords. As part of a future Federation goal we will look into improving that.

Next Steps

Id Tokens and User Authentication


Previously we discussed Logout and discovered that it may only work if we received an Id Token during Login.

Requesting an Id Token during Login

In the Final SPA Code Sample we will update to use the recommended Open Id Connect flow:

  • response_type = token id_token

The Id Token is an extra JWT returned in the response. This token is private to our UI and is not used by APIs:

The Id Token may also contain Basic User Info but, as we’ve discussed, the UI may also need to call its API to get more detailed User Data.

Token Validation

Before accepting tokens on its redirect URI, OIDC Client fully validates the id token and checks that it was issued for this SPA. The access token is not read, but its hash is checked against the id token’s at_hash claim.

Users are now Authenticated

We can now make this statement, since the id token is Proof of the User Authentication Event to our particular SPA, based on cryptographic verification.

Logout now Works

When we now do an Open Id Connect logout, the OIDC Client sends the id token in the URL and Okta will accept the request and remove session cookies:
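A sketch of how that logout URL is composed is shown below. The Okta domain, token value and redirect URI are placeholders, and in practice OIDC Client builds this URL for us:

```javascript
// Build an RP-initiated logout URL with the id token supplied as a hint.
// All concrete values here are placeholders.
function buildLogoutUrl(endSessionEndpoint, idToken, postLogoutRedirectUri) {
  const url = new URL(endSessionEndpoint);
  url.searchParams.set('id_token_hint', idToken);
  url.searchParams.set('post_logout_redirect_uri', postLogoutRedirectUri);
  return url.toString();
}

const logoutUrl = buildLogoutUrl(
  'https://dev-123456.okta.com/oauth2/default/v1/logout',
  'eyJraWQiOi...', // the id token received at login (placeholder)
  'https://web.mycompany.com/spa/loggedout.html');
console.log(logoutUrl.includes('id_token_hint=')); // true
```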

Attackers and the Redirect URI

The above validation mechanism ensures that the UI only uses access tokens issued to itself. This gives us extra protection against token substitution attacks:

I’m not convinced that the above is a major issue for most Corporate Apps, but it feels right that our UI validates received tokens.

Security Reviews and PEN Tests

In some corporate sectors you will also need to think about third party perceptions. If our app is missing Open Id Connect security features it is likely to be flagged as having vulnerabilities.

SPA Code Complexity

The one downside of using id tokens in an SPA is extra complexity, since we require the ‘jsrsasign’ library to do cryptographic token validation.

At least we are not writing the code ourselves though – we have externalized it to a Respected 3rd Party Library written by Security Experts.

Other Security Libraries and Id Tokens

As you can probably tell, in my early usage of OIDC Client, I was considering avoiding id tokens, due to the above complexity.

Partly this is because I have used Google AppAuth Mobile libraries in the past, and they do not fully support id token validation at the time of writing:

Use of id tokens in the mobile case is a little less important though, because token substitution attacks are mitigated via PKCE handling, which we will look at in a later post.

Where Are We?

We can now say that our SPA authenticates its users and we have some extra security features.

There is more complexity in the browser code, and the Javascript download size is greater, but these do not feel like blocking issues.

Next Steps



Previously we discussed how we will implement Token Renewal for our SPA and next we will discuss Logout requirements.

End Users and Logout

Most real users don’t care much about logout features, and are happy to just close their browser and log in again when they are prompted.

Testing as Different Users

Perhaps the most common reason why companies may want some type of logout functionality is just to support testing:

  • It can be useful to be able to log in to your SPA as different test users with different permissions to corporate assets

Persistent Login Cookies

Your Authorization Server or Identity Provider may use persistent cookies, in particular when you run your SPA in a mobile browser.

In this case, closing the UI may not log you out, and you may want to avoid having to ask test users to clear browser cookies.

LDAP Logouts?

For cases when users sign in automatically with LDAP credentials, the User Experience acts as though there is no login to your SPA:

In these cases it probably does not make sense to log out from the Identity Provider. You may choose to hide the Logout button, or just have a ‘Simple Logout’ that removes stored tokens.

Single Logout

Open Id Connect supports a more advanced type of logout, where logging out of one app can also raise an event to other apps. I don’t consider this an essential feature though, so we will not implement it.

If Single Logout is a feature you care about, the OIDC Client library has a Session Monitor class that implements it.

Basic Logout to Enable Testing

For our Code Sample we will implement Basic Logout from Okta, without SLO, via an Open Id Connect logout redirect:

After logout the user will be returned to the Login Screen, or, as above, we can supply a Post Logout Redirect URI to return the user to an SPA view.

Open Id Connect Logout

If we try to do an OIDC Client logout with our current code sample then the logout attempt will fail due to a missing id_token_hint query parameter:

Vendors, Libraries and Logout

Open Id Connect Logout is only a draft standard, so Authorization Servers and Security Libraries may not implement it yet, and some Authorization Servers have vendor specific solutions instead.

Where Are We?

We have worked out the basic support for Logout we want, to enable testing. However, we need to fix the above Id Token problem.

Next Steps

User Sessions and Token Renewal


In our last post we explained an Improved Code Sample with UI and API Claims Handling, but we are still missing essential features.

User Session Times for Your Company?

We will usually want to separate the User Session Time from the Access Token Lifetime, so that behavior is good from both Usability and Security viewpoints. This blog will use the following values:

UI Type     User Session Time    Access Token Lifetime
Web UI      12 hours             30 minutes
Mobile UI   1 week               30 minutes

Okta Settings

In Okta we can set times such as the above under API / Authorization Servers / Default / Access Policies / Default Policy Rule:

SPA Token Renewal can be a Blocking Issue

In our SPA Requirements we pointed out that there is no standard Implicit Flow solution for silently renewing Access Tokens.

Currently the user’s browser gets redirected to get a new access token every 30 minutes. From a usability viewpoint this is not Production Ready.

Implicit Flow and Session Cookies

Fortunately, Authorization Servers commonly issue Session Cookies that enable Silent Access Token Renewal.

In Okta the Session Cookie time is configured against our application, under Applications / Basic SPA / Sign On / Add Rule:

Token Renewal Mechanism

One option might be to make an Ajax request to the OAuth Authorization Endpoint, which would send the session cookie and get back a token:

However, browser CORS restrictions will prevent an Ajax client from reading Response Location headers or following the 302 redirect.

It turns out that the only option that works is to spin up a hidden iframe and perform the redirect there, so that the browser itself follows the redirects.

IFrame Redirects

During the OAuth redirect we can set a ‘prompt=none‘ query parameter to ensure that the renewal request never gets routed to a Login Page:
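A sketch of the authorize URL loaded on the hidden iframe is shown below. The endpoint, client id and redirect URI are placeholder values, and OIDC Client builds the real URL for us:

```javascript
// Build a silent renewal URL. The prompt=none parameter makes the
// Authorization Server return an error rather than render a login page.
// All concrete values here are placeholders.
function buildSilentRenewUrl(authorizeEndpoint, clientId, silentRedirectUri, state, nonce) {
  const url = new URL(authorizeEndpoint);
  url.searchParams.set('client_id', clientId);
  url.searchParams.set('redirect_uri', silentRedirectUri);
  url.searchParams.set('response_type', 'token id_token');
  url.searchParams.set('scope', 'openid profile email');
  url.searchParams.set('prompt', 'none');
  url.searchParams.set('state', state);  // protects against CSRF
  url.searchParams.set('nonce', nonce);  // protects against token replay
  return url.toString();
}

const renewUrl = buildSilentRenewUrl(
  'https://dev-123456.okta.com/oauth2/default/v1/authorize',
  '0oaabcd1234',
  'https://web.mycompany.com/spa/silentrenew.html',
  'xyz123', 'abc789');
console.log(renewUrl.includes('prompt=none')); // true
```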

The response will either be a new access token or a ‘login_required‘ error. The latter response indicates that the user’s session has expired, and that a new user login on the main window is needed:
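Handling the iframe’s response fragment can be sketched like this. It is illustrative only, since OIDC Client does this work internally:

```javascript
// Parse the hash fragment returned to the iframe's redirect URI:
// either new tokens, or an error such as login_required.
function parseRenewalResponse(fragment) {
  const params = new URLSearchParams(fragment.replace(/^#/, ''));
  const error = params.get('error');
  if (error) {
    // login_required means the session cookie has expired, so a new
    // user login on the main window is needed
    return { error };
  }
  return {
    accessToken: params.get('access_token'),
    idToken: params.get('id_token'),
  };
}

console.log(parseRenewalResponse('#error=login_required').error); // login_required
console.log(parseRenewalResponse('#access_token=abc&id_token=def').accessToken); // abc
```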

Same Origin Restrictions

If your renewal redirect does not use prompt=none, you will sometimes be routed to a Login Page. Almost all Login Pages will refuse to render on an iframe due to an X-Frame-Options header, which is best practice for vendor software:

IFrame Redirects and Code Complexity

We now have a solution for our Session Management requirements. It is easy to set the iframe redirect URL but these aspects remain difficult:

  • We need to manage iframe page execution in our SPA
  • We need to prevent noticeable delays for end users
  • We need to handle responses and update tokens in HTML5 storage
  • We need to handle login_required responses on the iframe
  • We need to handle errors and report them on the main window
  • We need to deal with blocked / timed out requests

OIDC Client and Token Renewal

Fortunately for us the OIDC Client library has a tested solution for token renewal, which does most of the above work for us, and which we will look at shortly.
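The library’s renewal behaviour is driven by settings such as those sketched below. The URLs and ids are placeholders, and automaticSilentRenew tells OIDC Client to run the iframe redirect itself whenever the ‘expiring’ event fires:

```javascript
// oidc-client settings that enable built-in silent token renewal.
// All concrete values are placeholders for your own configuration.
const settings = {
  authority: 'https://dev-123456.okta.com/oauth2/default',
  client_id: '0oaabcd1234',
  redirect_uri: 'https://web.mycompany.com/spa/',
  silent_redirect_uri: 'https://web.mycompany.com/spa/silentrenew.html',
  response_type: 'token id_token',
  scope: 'openid profile email',
  automaticSilentRenew: true, // renew on the 'expiring' event via a hidden iframe
};
console.log(settings.automaticSilentRenew); // true
```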

Where Are We?

We now have a better understanding of how the overall User Session will work for our SPA and we know what we want to implement.

Next Steps

  • We will continue with the theme of Session Management and discuss our Logout requirements
  • For subsequent samples and a list of all blog posts see the Index Page