# Factry Historian MCP Server (HTTP mode)
This guide walks you through running the Factry Historian MCP server in HTTP mode. Use this when you have a custom LLM application or a web-based AI platform that connects to MCP servers over HTTP rather than launching them as a local process. If you are using Claude Desktop or a similar desktop AI client, the stdio setup (docid: hf arffozskiydjuzoulh) is simpler and recommended instead.

## When to use HTTP mode

| Scenario | Recommended transport |
| --- | --- |
| Claude Desktop, Cursor, VS Code Copilot | stdio (see docid: hf arffozskiydjuzoulh) |
| Custom LLM application or internal AI platform | HTTP |
| Web-based AI tools that connect to remote MCP servers | HTTP |
| Multiple users or clients sharing one MCP server | HTTP |

## Prerequisites

Before you start, make sure you have:

- A running Factry Historian instance.
- The API URL of your Historian instance (e.g. `https://historian.example.com/api`).
- An API token with access to Factry Historian (docid: vr6prldwhulaweklzwol).
- Your organization UUID, visible in the Historian web interface under Organization Settings (docid: odmadpifqlvqbpgfarnxi).
- Docker installed on the server that will run the MCP server. Install from the official Docker website:
  - Mac / Windows: https://www.docker.com/products/docker-desktop/
  - Linux: https://docs.docker.com/engine/install/
- A domain name and TLS certificate if you intend to expose the server to an external LLM platform (required for production; see "Exposing over HTTPS" below).

## Basic setup (no authentication)

This is the quickest way to get the server running. It is suitable for testing on a private network, or when your LLM app and the MCP server are on the same machine. Do not expose this setup to the internet without adding authentication.

Start the container:

```bash
docker run -d \
  --name historian-mcp \
  -p 8080:8080 \
  -e FACTRY_HISTORIAN_API_URL=https://historian.example.com/api \
  -e FACTRY_HISTORIAN_API_TOKEN=your-historian-token \
  -e FACTRY_HISTORIAN_ORGANIZATION_UUID=your-org-uuid \
  -e TRANSPORT_MODE=http \
  -e HTTP_HOST=0.0.0.0 \
  -e HTTP_PORT=8080 \
  factry/mcp-server:latest
```
The server is now reachable at `http://your-server:8080`. Verify it is running:

```bash
curl http://localhost:8080/health
```

You should receive a `200 OK` response with a JSON body like:

```json
{"status": "healthy", "time": "..."}
```

### Test with MCP Inspector

The [MCP Inspector](https://modelcontextprotocol.io/docs/tools/inspector) is a tool for testing MCP servers. Install and launch it with:

```bash
npx @modelcontextprotocol/inspector
```

Then open the Inspector UI in your browser and:

1. Select **Streamable HTTP** as the transport type.
2. Enter URL `http://localhost:8080`.
3. Click **Connect**.
4. Open the **Tools** tab and try calling `search_assets`.

## Exposing over HTTPS

Most LLM platforms require the MCP server to be reachable over HTTPS on port 443. Even if your platform accepts a custom port, using HTTPS is strongly recommended to protect data in transit. Run the MCP server on localhost and put a reverse proxy in front of it; the proxy handles TLS and forwards traffic to the container.

### Bind the container port to localhost only

When a reverse proxy handles all external traffic, there is no reason for the container's HTTP port to be reachable from outside the host. Use `-p 127.0.0.1:8080:8080` instead of `-p 8080:8080` in your `docker run` command to bind the port to localhost only:

```bash
docker run -d \
  --name historian-mcp \
  -p 127.0.0.1:8080:8080 \
  factry/mcp-server:latest
```

This ensures the unauthenticated HTTP port is never directly reachable from the network; only the reverse proxy can reach it.

### With Caddy (recommended, automatic TLS)

Install Caddy and create a `Caddyfile`:

```
mcp.your-domain.com {
    reverse_proxy localhost:8080
}
```

Caddy automatically obtains and renews a Let's Encrypt certificate. Start it with `caddy run`.

### With nginx

```nginx
server {
    listen 443 ssl;
    server_name mcp.your-domain.com;

    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection '';
        proxy_buffering off;
    }
}
```
The `proxy_buffering off;` line is important: it ensures server-sent events (SSE) are streamed in real time rather than buffered.

After setting up the reverse proxy, your MCP server is reachable at `https://mcp.your-domain.com`. Keep `HTTP_HOST=0.0.0.0` in the container if the reverse proxy is on the same host. If the proxy is on a different machine, make sure the port is reachable from that machine and not exposed to the wider internet directly.

## Authentication

When the MCP server is accessible over a network, you should enable authentication so that only authorized users and applications can use it. The server supports JWT authentication, which is compatible with any standard OAuth 2.0 / OpenID Connect provider, including Azure AD, Okta, Auth0, and Keycloak.

### How it works

1. Your LLM app obtains a JWT access token from your identity provider (it likely already does this as part of user login).
2. Your app passes the token in every MCP request as `Authorization: Bearer <token>`.
3. The MCP server validates the token's signature and expiration, and (optionally) checks for required scopes.

### Option A: JWKS validation (recommended for Azure AD and other OAuth providers)

Most identity providers sign tokens with a private key and publish the corresponding public keys at a JWKS endpoint. This is the most secure option and handles key rotation automatically. Add these variables to your `docker run` command:

```bash
-e AUTH_PROVIDER=jwt \
-e JWKS_URL=https://your-identity-provider.com/.well-known/jwks.json
```

Common JWKS URLs:

| Provider | JWKS URL |
| --- | --- |
| Azure AD (Entra ID) | `https://login.microsoftonline.com/{tenant-id}/discovery/v2.0/keys` |
| AD FS (on-premises) | `https://your-adfs.example.com/adfs/discovery/keys` |
| Okta | `https://your-domain.okta.com/oauth2/default/v1/keys` |
| Auth0 | `https://your-domain.auth0.com/.well-known/jwks.json` |
| Keycloak | `https://your-keycloak.com/realms/your-realm/protocol/openid-connect/certs` |

Replace `{tenant-id}` with your Azure AD directory (tenant) ID, visible in the Azure portal under Azure Active Directory > Overview.
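When a token is unexpectedly rejected, it often helps to inspect which claims (`aud`, `scp`, `exp`) it actually carries. The sketch below does that with only the Python standard library; the helper name is ours, and it deliberately skips signature verification, which the MCP server always performs:

```python
import base64
import json


def decode_jwt_claims(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature.

    Debugging aid only: shows which claims (aud, scp, exp, ...) a token
    carries. Never use unverified claims for authorization decisions.
    """
    payload_b64 = token.split(".")[1]
    # JWT encoding strips base64url padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Compare the decoded `aud` and scope claims against your configured audience and required scopes when requests come back `401`.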
A full `docker run` example for Azure AD:

```bash
docker run -d \
  --name historian-mcp \
  -p 8080:8080 \
  -e FACTRY_HISTORIAN_API_URL=https://historian.example.com/api \
  -e FACTRY_HISTORIAN_API_TOKEN=your-historian-token \
  -e FACTRY_HISTORIAN_ORGANIZATION_UUID=your-org-uuid \
  -e TRANSPORT_MODE=http \
  -e HTTP_HOST=0.0.0.0 \
  -e HTTP_PORT=8080 \
  -e AUTH_PROVIDER=jwt \
  -e JWKS_URL=https://login.microsoftonline.com/{tenant-id}/discovery/v2.0/keys \
  -e REQUIRED_SCOPES=api://your-app-id/historian.read \
  factry/mcp-server:latest
```

`REQUIRED_SCOPES` is optional but recommended: it ensures only tokens issued for the MCP server are accepted. Define a scope in your identity provider's app registration (e.g. `historian.read`) and pass its full URI here.

### Audience validation (strongly recommended)

Without audience validation, any token your identity provider issues for any application in the same tenant is accepted, as long as the signature is valid. Set `EXPECTED_AUDIENCE` to the identifiers your tokens will carry:

```bash
-e EXPECTED_AUDIENCE=api://your-app-id,your-app-id
```

`EXPECTED_AUDIENCE` accepts a comma-separated list of audience strings. Azure AD v1 tokens use `aud = api://{app-id}`; v2 tokens often use `aud = {app-id}` (a bare GUID). Listing both covers both cases. If a token's `aud` claim doesn't match any of the configured values, the request is rejected with `401`.

### Option B: shared secret (HMAC)

If your LLM app generates its own JWT tokens rather than using an OAuth provider, you can validate them with a shared secret:

```bash
-e AUTH_PROVIDER=jwt \
-e JWT_SECRET=your-secret-key-here
```

Use a long, randomly generated string. Tokens must use the HS256 algorithm and include `exp` and `sub` claims. The shared secret must be kept confidential: anyone who has it can generate valid tokens. Prefer JWKS validation when using an external identity provider.
### OAuth 2.0 discovery (optional)

Some MCP clients (browser-based assistants, IDE integrations) don't let you paste a bearer token; they expect to run the OAuth flow themselves. For that to work, the server needs to tell them where to send the user for authentication. This is done via the [RFC 9728](https://datatracker.ietf.org/doc/html/rfc9728) protected resource metadata endpoint at `/.well-known/oauth-protected-resource`.

To enable discovery, set two additional environment variables:

```bash
-e MCP_PUBLIC_URL=https://mcp.your-domain.com \
-e OAUTH_AUTHORIZATION_SERVER=https://login.microsoftonline.com/{tenant-id}/v2.0
```

- `MCP_PUBLIC_URL`: the public URL clients reach the server on (behind your reverse proxy). When this is set, the server publishes metadata at `/.well-known/oauth-protected-resource` and sends a `WWW-Authenticate: Bearer resource_metadata="..."` header on `401` responses. Leave unset to disable discovery (bearer-token-only mode).
- `OAUTH_AUTHORIZATION_SERVER`: the authorization server's issuer URL. For Azure AD this is `https://login.microsoftonline.com/{tenant-id}/v2.0`. Without this, the metadata document is published but doesn't point clients anywhere.

If `REQUIRED_SCOPES` is set, those scopes are also advertised in the metadata as `scopes_supported`.

### Dynamic client registration shim (for turnkey MCP clients)

Some MCP clients require [RFC 7591 dynamic client registration](https://datatracker.ietf.org/doc/html/rfc7591): they refuse to proceed unless they can POST to a registration endpoint and receive a client ID. Most enterprise identity providers (Azure AD / Entra ID, Okta without admin consent, on-prem AD FS) don't offer DCR. The server can bridge this gap by accepting registration requests itself and handing out a pre-registered client ID you configure in your IdP. The shim is provider-agnostic: it works with any OIDC provider.

Activate it by setting `OAUTH_REGISTERED_CLIENT_ID`:

```bash
-e OAUTH_REGISTERED_CLIENT_ID=your-preconfigured-client-id
```

When the shim is active, the server also:

- serves RFC 8414 authorization server metadata at `/.well-known/oauth-authorization-server`, advertising itself as the authorization server;
- updates the protected resource metadata so `authorization_servers` points at this server (clients discover our AS metadata rather than the upstream's);
- serves `/authorize` (a 302 redirect to the upstream authorize endpoint, with query parameters forwarded) and `/token` (a reverse proxy to the upstream token endpoint).
#### How it finds upstream endpoints

On startup, the server performs OIDC discovery at `{OAUTH_AUTHORIZATION_SERVER}/.well-known/openid-configuration` to find `authorization_endpoint` and `token_endpoint`. If your provider doesn't support OIDC discovery, set both explicit overrides:

```bash
-e OAUTH_AUTHORIZE_URL=https://your-idp.example.com/oauth2/authorize \
-e OAUTH_TOKEN_URL=https://your-idp.example.com/oauth2/token
```

No tokens are issued or signed by this server. The shim only proxies endpoints and fakes registration; all JWTs continue to come from your upstream IdP and are validated against `JWKS_URL` as before.

#### What to do in your IdP

- Register one OAuth application manually, with the client ID you'll pass as `OAUTH_REGISTERED_CLIENT_ID`.
- Register as many redirect URIs as you need (one per MCP client you plan to support). The shim hands out the same client ID to every MCP client that registers; each client uses its own redirect URI, which the IdP validates against the list you registered.

### Testing authentication with MCP Inspector

#### Bearer token (simplest)

1. Obtain a valid token from your identity provider.
2. Launch the Inspector: `npx @modelcontextprotocol/inspector`
3. Select **Streamable HTTP**, enter your URL, and expand **Custom Headers**.
4. Add a header named `Authorization` with value `Bearer <your-token-here>`.
5. Click **Connect**.

#### Full OAuth flow (requires discovery enabled)

1. Launch the Inspector and select **Streamable HTTP**.
2. Enter your server URL and open the **Auth** tab.
3. The Inspector fetches metadata from `/.well-known/oauth-protected-resource`, redirects you to the authorization server (e.g. Microsoft login), and exchanges the resulting code for a token.

For this to work, register the Inspector's redirect URI (shown in its UI) under **Authentication** in your Azure AD app registration.

If you get HTTP 401 trying to load well-known OAuth metadata in the Inspector, `MCP_PUBLIC_URL` isn't set. Verify the metadata endpoint is reachable without authentication:

```bash
curl https://mcp.your-domain.com/.well-known/oauth-protected-resource
```
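If you prefer scripting that check, the sketch below validates the metadata document's two essential fields (names per RFC 9728, which the server implements; the helper itself is ours):

```python
import json


def check_resource_metadata(raw_json: str, expected_resource: str) -> list:
    """Sanity-check an RFC 9728 protected-resource metadata document.

    Returns the advertised authorization servers, or raises if the
    document would leave MCP clients with nowhere to authenticate.
    """
    meta = json.loads(raw_json)
    # 'resource' should match the MCP_PUBLIC_URL the server is configured with.
    if meta.get("resource") != expected_resource:
        raise ValueError(f"resource mismatch: {meta.get('resource')!r}")
    servers = meta.get("authorization_servers") or []
    if not servers:
        raise ValueError("metadata does not point clients at any authorization server")
    return servers
```

Feed it the body returned by the `curl` command above, together with your configured `MCP_PUBLIC_URL`.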
## Connecting your LLM application

Your LLM app connects to the MCP server using a streamable HTTP MCP transport. The exact API depends on which MCP SDK your app uses, but the connection details are always:

- **URL**: `https://mcp.your-domain.com` (or `http://localhost:8080` for local testing)
- **Transport**: streamable HTTP (also called `streamablehttp` or `http+sse`, depending on the SDK)
- **Header**: `Authorization: Bearer <token>` (when authentication is enabled)

The server exposes a single endpoint at `/`, which handles both POST (for sending requests) and GET (for receiving server-sent events).

## Configuration reference

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `FACTRY_HISTORIAN_API_URL` | Yes | — | URL to your Factry Historian API |
| `FACTRY_HISTORIAN_API_TOKEN` | Yes | — | Token for authenticating with Historian |
| `FACTRY_HISTORIAN_ORGANIZATION_UUID` | Yes | — | UUID of the organization in Historian |
| `TRANSPORT_MODE` | Yes | `stdio` | Set to `http` to enable HTTP mode |
| `HTTP_HOST` | No | `0.0.0.0` | Interface to bind to. The default binds to all interfaces (`0.0.0.0`); use `127.0.0.1` to restrict to localhost only |
| `HTTP_PORT` | No | `8080` | Port to listen on |
| `AUTH_PROVIDER` | No | `none` | Authentication method: `none` or `jwt` |
| `JWKS_URL` | Conditional | — | JWKS endpoint for public-key validation (RS256/ES256). Required when `AUTH_PROVIDER=jwt` and no `JWT_SECRET` is set |
| `JWT_SECRET` | Conditional | — | Shared secret for HMAC validation (HS256). Required when `AUTH_PROVIDER=jwt` and no `JWKS_URL` is set |
| `REQUIRED_SCOPES` | No | — | Comma-separated list of scopes that must be present in the token |
| `EXPECTED_AUDIENCE` | No | — | Comma-separated list of accepted `aud` claim values. Strongly recommended, to prevent cross-resource token reuse |
| `MCP_PUBLIC_URL` | No | — | Public URL of this MCP server. When set, enables RFC 9728 OAuth discovery via `/.well-known/oauth-protected-resource` |
| `OAUTH_AUTHORIZATION_SERVER` | No | — | OAuth authorization server issuer URL (e.g. `https://login.microsoftonline.com/{tenant}/v2.0`). Advertised in the discovery metadata and used for OIDC discovery when the DCR shim is enabled |
| `OAUTH_REGISTERED_CLIENT_ID` | No | — | Pre-registered client ID to hand out on `/register`. Setting this activates the DCR shim (AS metadata, `/register`, `/authorize`, `/token`) |
| `OAUTH_AUTHORIZE_URL` | No | — | Explicit upstream authorize endpoint. Overrides OIDC discovery if set; useful when your IdP doesn't publish a `.well-known/openid-configuration` document |
| `OAUTH_TOKEN_URL` | No | — | Explicit upstream token endpoint. Same semantics as `OAUTH_AUTHORIZE_URL` |
| `GRAFANA_URL` | No | — | URL to your Grafana instance (enables Grafana tools) |
| `GRAFANA_API_KEY` | No | — | Grafana service account token |
| `LOG_LEVEL` | No | `info` | Log verbosity: `debug`, `info`, `warn`, `error` |
| `STATISTICS_DISABLED` | No | `false` | Set to `true` to disable anonymous usage statistics |

## Troubleshooting

### The server is unreachable from my LLM app

- Check that the container is running: `docker ps`
- Verify the reverse proxy is running and listening on port 443.
- Confirm DNS for your domain resolves to the correct server.
- Test from the server itself: `curl http://localhost:8080/health`

### Requests are rejected with 401 Unauthorized

- Confirm `AUTH_PROVIDER=jwt` is set and the token is included in the `Authorization` header.
- Check the token has not expired.
- Verify `JWKS_URL` is reachable from inside the container: `docker exec historian-mcp wget -qO- $JWKS_URL`
- Check the token contains the required scopes, if `REQUIRED_SCOPES` is set.

### Server-sent events are not streaming (responses are delayed)

- Ensure `proxy_buffering off;` is set in nginx, or that your reverse proxy does not buffer responses.
- Some load balancers have their own buffering settings; consult your proxy's documentation.

### The container exits immediately

Run it without `-d` to see the error output:

```bash
docker run --rm \
  -e FACTRY_HISTORIAN_API_URL=https://historian.example.com/api \
  -e FACTRY_HISTORIAN_API_TOKEN=your-token \
  -e FACTRY_HISTORIAN_ORGANIZATION_UUID=your-uuid \
  -e TRANSPORT_MODE=http \
  factry/mcp-server:latest
```

Common causes: missing or misspelled environment variables, an invalid Historian API token, or an invalid `AUTH_PROVIDER` / `TRANSPORT_MODE` value.
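When the health endpoint responds but your LLM app still cannot connect, it can help to reproduce the first request an MCP client sends over the streamable HTTP transport. A sketch using only the Python standard library; the protocol version and `clientInfo` values are illustrative assumptions (your MCP SDK fills these in and negotiates the version for you):

```python
import json
import urllib.request
from typing import Optional


def build_initialize_request(
    base_url: str, token: Optional[str] = None
) -> urllib.request.Request:
    """Build the JSON-RPC 'initialize' POST an MCP client sends first."""
    body = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumed version; SDKs negotiate this
            "capabilities": {},
            "clientInfo": {"name": "preflight-check", "version": "0.1.0"},
        },
    }
    headers = {
        "Content-Type": "application/json",
        # Streamable HTTP servers may answer with plain JSON or an SSE stream.
        "Accept": "application/json, text/event-stream",
    }
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(
        base_url, data=json.dumps(body).encode(), headers=headers, method="POST"
    )
```

Sending this request (e.g. with `urllib.request.urlopen`) and inspecting the status code distinguishes an unreachable server, a `401` from missing or invalid authentication, and a healthy MCP endpoint.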