# Installing Factry Historian using Docker
Factry Historian is an open data platform designed to collect, store, and query time-series data from industrial processes. It acts as the central hub for operational data, making it easier to access, analyze, and integrate information across your production environment.

Traditionally, process data is scattered across different systems such as PLCs, SCADA, databases, and spreadsheets. A historian brings this information together in one place, providing:

- Long-term storage of high-resolution process data
- Fast access to both recent and historical trends
- Integration capabilities with dashboards, analytics tools, and external applications
- A single source of truth for engineers, operators, and managers to base decisions on

With Factry Historian, you get an open system that is API-first, flexible, and built to integrate with modern data workflows.

This guide will help you get started by running Factry Historian locally using Docker Compose. You will set up the core components, start the system, and install a first collector to bring in some data. The goal of this tutorial is not to create a production-ready installation. After completing the steps, you will have a local environment where you can explore the platform, run queries, and visualize real-time data. It provides a safe way to test Factry Historian without touching production systems: you can use it to explore the API, try out data queries, and visualize real-time signals. This guide also serves as a practical way to evaluate how Factry Historian can replace legacy historians, giving you insight into compatibility and functionality before a full-scale deployment.

## Target audience

This guide is written for technical users who are comfortable working with a {{cli}}, Docker, and basic networking concepts.

## Prerequisites

Before you start, make sure your environment meets the following requirements.

### Required software

Docker and Docker Compose:

- **Windows**: install Docker Desktop (incl. Docker Compose)
- **Mac**: install Docker Desktop (incl. Docker Compose), or install with a command (using the Homebrew package manager):

```bash
brew install docker docker-compose
```

- **Linux**: install Docker Desktop (incl. Docker Compose), or install Docker Engine (or use a command to install both with your {{os}} package manager)

To verify your Docker and Docker Compose installation:

```bash
docker -v
docker compose version
```

If Docker Compose v1 is installed, you need to use `docker-compose` instead of `docker compose`, which is only supported from v2 onwards (recommended).

### Optional software

- Git (to clone the setup files)

### Recommended system resources

For a smooth experience when running Factry Historian locally, we suggest at least:

- CPU: 2 cores
- Memory: 4 GB RAM
- Disk space: 5 GB free

### Networking

All services will run on your local machine and are accessible through localhost. By default, the following ports are used:

- Factry Historian webpage and {{rest api}}: 8000
- Factry Historian {{grpc}}: 8001
- Grafana webpage: 3000

Make sure these ports are available before starting the setup. If any of them are already in use, you can adjust the mappings later in the advanced Docker Compose configuration.

### Security

During this guide, you will create multiple credentials for the services. It is recommended to store these credentials securely in your password manager.

## Downloading the Docker setup

To run Factry Historian locally, use the main Docker Compose file from the FactryLabs repository. It includes all required services and sensible defaults for a quick setup.

Clone the repo with Git:

```bash
git clone https://github.com/factrylabs/historian
cd historian
docker compose up -d
```

The repository provides a top-level `docker-compose.yml` and a README with quick-start instructions. An `advanced/` directory contains a more customizable setup.

### Alternative: download manually

You can also download the Docker Compose file manually by browsing the repository on GitHub and downloading the `docker-compose.yml` file to a directory of choice.

In Bash (Mac or Linux):

```bash
curl -O https://raw.githubusercontent.com/factrylabs/historian/refs/heads/main/docker-compose.yml
```

In PowerShell (Windows):

```powershell
Invoke-WebRequest -Uri https://raw.githubusercontent.com/factrylabs/historian/refs/heads/main/docker-compose.yml -OutFile docker-compose.yml
```

Can't clone or download from the repo? Go to the FactryLabs repository, click the green **<> Code** button, choose **Download ZIP**, and unzip the file.

### What's included

The Compose setup runs these services together on a local Docker network:

- Factry Historian (webpage + API), exposed at http://localhost:8000
- Grafana, exposed at http://localhost:3000

Default credentials and URLs are documented in the repo's README. Change the defaults if you keep the stack running beyond a short test.

### About the 'advanced' setup

If you need more control (environment variables, cloud-init automation), use the `advanced/` directory described in the repo. It's intended for tailored evaluations beyond this out-of-the-box quick start. Have a look at the Advanced setup (docid: afzibxfvnoa9luyzhuygt) and come back here once Historian is running.

## Starting Factry Historian

Once you have downloaded the setup (and optionally configured your env file), you can start the stack with Docker Compose.

### 1. Start the services

From the root of the project, run:

```bash
docker compose up -d
```

This starts all containers in the background. If you prefer to run the stack in the foreground, use `docker compose up`; logs from each service will appear in your terminal.

### 2. Verify the containers

Check that all containers are running:

```bash
docker ps
```

You should see entries for:

- historian
- postgres
- influxdb
- grafana

If any container has an error status, inspect the logs. For example:

```bash
docker compose logs historian
```

### 3. Confirm service availability

After a short startup time, the services should be available at:

- Historian API: http://localhost:8000
- Grafana: http://localhost:3000

Make sure to use `http://` in your browser. Some browsers override it automatically to `https://`, in which case use an incognito window.
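If a container fails to start, a common cause is that one of the default ports is already taken. The check is easy to script; the following is a minimal sketch (not part of the official setup) that probes the three default ports from the host:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Ports used by the default docker-compose setup
for port, service in [(8000, "Historian web/REST"),
                      (8001, "Historian gRPC"),
                      (3000, "Grafana")]:
    status = "IN USE - remap before starting" if port_in_use(port) else "free"
    print(f"{port} ({service}): {status}")
```

Run it before `docker compose up -d`: any port reported as in use (by something other than this stack) needs a different mapping in the Compose file.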
## Completing the onboarding wizard

Open your browser and head to http://localhost:8000, or an alternative URL if you changed it. You should see the login screen.

### 1. Login screen

- **Username**: default is `factry`
- **Password**: default is `password`

Click **Login**. You will be asked to change the default password after first login.

### 2. Change default password

- **Password**: enter a new password of your choice
- **Password confirmation**: re-enter the same password

Click **Confirm** to continue.

### 3. Setup wizard: welcome screen

Just click **Next** to start the setup wizard.

### 4. Setup wizard: license

Two options:

- **Activate a license**: enter a valid license key. This is intended for production setups.
- **Start trial**: read and check "I accept the End User License Agreement" and click **Start trial** (the trial runs for 2 hours before auto-shutdown).

When in trial license mode, Factry Historian will shut down every 2 hours. All configuration is kept upon restarting the service.

### 5. Setup wizard: organization

Fill in the details for the first organization:

- **Name**: choose an organization name (e.g. `Default organization`)
- **Description**: optional description
- **PostgreSQL user**: username for the read-only PostgreSQL user
- **PostgreSQL user password**: choose a password
- **PostgreSQL user password confirmation**: repeat the password

Click **Next** to continue.

### 6. Setup wizard: internal database (InfluxDB)

Configure the internal time-series database. This database is used to store metrics and logs about Factry Historian itself.

- **Database type**: select `Influx`
- **Admin user**: enter the username for the InfluxDB admin; if you did not change it, use `factry`
- **Admin password**: enter the password of the InfluxDB admin; if you did not change it, use `password`
- **Host**: the host for InfluxDB; enter `http://influxdb:8086`
- **Database**: database name (default `internal_factry`)
- **Read-only user**: choose a username for read-only access (e.g. `ro`); this user will be created automatically
- **Read-only password**: choose a password for the read-only user
- Leave **Create database** checked (the default; recommended for a fresh setup)

Click **Next** to continue.
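The **Host** field above points at the InfluxDB container on the Compose network, so it is only resolvable from other containers. If you want to verify from a script that an InfluxDB 1.x instance is reachable at some URL, its `/ping` endpoint answers with HTTP 204 when healthy. A minimal sketch (assumption: from your host you may first need to publish port 8086 in the Compose file, since only Historian and Grafana are exposed by default):

```python
import urllib.request
import urllib.error

def influx_ping(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if an InfluxDB 1.x instance answers GET /ping with 204."""
    try:
        url = f"{base_url.rstrip('/')}/ping"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 204
    except (urllib.error.URLError, OSError):
        return False

# Example (adjust to wherever InfluxDB is reachable from your machine):
#   influx_ping("http://localhost:8086")
```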
### 7. Setup wizard: Historian configuration

General Historian settings:

- **gRPC port**: port used by collectors (default `8001`)
- **REST port**: port used by collectors (default `8000`)
- **URL**: internal API URL collectors will communicate with (enter `http://historian`)
- **Session inactive duration**: login session timeout, default `7d`
- **Base URL**: external URL for the Historian webpage, e.g. `http://localhost:8000` (or your hostname if running the Docker Compose stack remotely)

Click **Next** to continue.

### 8. Setup wizard: finished

Factry Historian orders you to have fun. We agree and comply. Click **Finish** to enter the application.

### 9. Collectors screen (after setup)

After finishing the wizard, you land on the collectors screen.

## Creating a time series database

Before you install your first collector, you need to create a time series database where measurements will be stored.

### 1. Open the time series databases page

In the Historian webpage, go to **Configuration > Time series databases**. You'll see the internal system database (`internal_factry`) already present, which stores metrics and logs about Factry Historian itself. Click **+ Create database** to add a new one for your own process data.

### 2. Fill in the database details

In the create database form, provide:

- **Name**: enter a descriptive name (e.g. `historian`)
- **Description**: optional (e.g. `stores test data from realfakedata.com`)
- **Database type**: select `Influx` (the default database type for time series data)

Under settings:

- **Admin user**: username for the InfluxDB admin; if you did not change it, use `factry`
- **Admin password**: password for the admin user; if you did not change it, use `password`
- **Host**: the InfluxDB host; use `http://influxdb:8086` by default
- **Database**: database name, for example `historian`
- **Read-only user**: username for Grafana access (e.g. `grafana`); the user will be created, or granted the appropriate permissions, automatically
- **Read-only password**: password for the read-only user

Make sure **Create database** is checked, so Historian will initialize it if it does not exist yet. Do not check **Create Grafana data source**: that feature is deprecated in favor of the Factry Historian datasource plugin for Grafana.

### 3. Save and test

Click **Save & Test**. Historian will validate the connection and create the database. If successful, the new database appears in the time series databases list. If the test fails, check that the InfluxDB container is running and that the host/port matches your Docker Compose setup.

## Installing your first collector

Collectors connect to external data sources (like OPC-UA servers, MQTT brokers, or SQL servers) and retrieve data points from them. In this guide, each collector runs as a separate Docker container: you create a collector in the Factry Historian webpage to obtain a token, then run the collector container with that token.

### 1. Open the collectors page

In the Historian webpage, go to **Collectors**. You'll see built-in system collectors (such as the calculation collector) already listed. Click **+ Create collector** to start adding a new one.

### 2. Create a new collector

In the create collector form:

- **Name**: enter a descriptive name (e.g. `realfakedata`)
- **Description**: optional (e.g. `collects data from realfakedata.com`)
- **Default database**: select the time series database you just created (e.g. `historian`)

Click **Submit**. You will return to the collector overview page. The collector will appear in the list in an *Initializing* state until its container is running with the correct token.

### 3. Generate a token

Back on the collector overview page, select your newly created collector and click **Generate token**. Copy the generated token somewhere safe, either by selecting and copying or by clicking **Copy to clipboard**; you'll use it to start the collector container.

### 4. Run the collector container

To collect data from realfakedata.com, run the collector Docker image in your {{cli}} with the token and the `mqtt-sparkplugb` product type:

```bash
docker run -d --restart unless-stopped --name factry-collector-realfakedata \
  --network=historian_default \
  -e PRODUCT=mqtt-sparkplugb \
  -e API_TOKEN=<paste your token here> \
  ghcr.io/factrylabs/collector:latest
```

Optionally, give the collector a unique name (e.g. `--name factry-collector-realfakedata`). Replace `<paste your token here>` with the token from the previous step. For the remainder of this quickstart guide, we will configure the Sparkplug B MQTT collector in favor of other collector types.

#### Other data sources

Instead of `mqtt-sparkplugb`, you could also use any of the following supported protocols:

- `opc-ua` (for OPC-UA data sources)
- `opc-da` (for OPC-DA data sources; can only run on a Windows host)
- `modbus` (for Modbus TCP data sources)
- `mqtt-sparkplugb` (for MQTT data sources with {{sparkplugb}} messages)
- `mqtt-generic` (for MQTT data sources with {{json}} messages)
- `sql` (for SQL data sources)
- `rest-api-json` (for {{rest api}} sources with a JSON payload)

For example:

OPC-UA collector:

```bash
docker run -d --restart unless-stopped --name factry-collector \
  --network=historian_default \
  -e PRODUCT=opc-ua \
  -e API_TOKEN=<paste your token here> \
  ghcr.io/factrylabs/collector:latest
```

REST API collector (JSON payload, from the preview channel):

```bash
docker run -d --restart unless-stopped --name factry-collector \
  --network=historian_default \
  -e PRODUCT=rest-api-json \
  -e RELEASE_CHANNEL=preview \
  -e API_TOKEN=<paste your token here> \
  ghcr.io/factrylabs/collector:latest
```

Modbus TCP collector:

```bash
docker run -d --restart unless-stopped --name factry-collector \
  --network=historian_default \
  -e PRODUCT=modbus \
  -e API_TOKEN=<paste your token here> \
  ghcr.io/factrylabs/collector:latest
```

### 5. Verify the collector is online

Return to the collectors overview page in the Historian webpage. The new collector should now show as *Initializing*. When you select it, its heartbeat should be green with status "last seen just now". If it remains offline, check the logs:

```bash
docker logs -f factry-collector-realfakedata
```

### 6. Configure the collector

With the collector selected, click **Edit** and complete the collector-specific settings for Sparkplug B data from realfakedata.com:

- **Failover**: off
- **MQTTURL**: `tcp://mqtt.realfakedata.com:1883`
- **ClientCertificate**: false
- **Username**: empty
- **Password**: empty
- **Topic**: `spBv1.0/simulator/#`
- **QoS**: 1
- **PersistentSession**: on
- **TimestampLayout**: empty
- **DiscoverMeasurements**: true
- **AutoOnboard**: true

Click **Save**. Now, with the collector selected in the collector overview page, click **Start**. After a couple of seconds, the collector health should turn green with the status *Collecting*.

Note that when stopping the collector, data collection stops. When resuming the collector, data collection starts again and any missed data points are stored (the data points were buffered in the MQTT broker). This is not according to the MQTT Sparkplug specification, but works given specific settings configured on the MQTT broker and the MQTT publishing client under the hood of realfakedata.com.

## Verify auto-onboarding of measurements

Because you configured the Sparkplug B collector with autodiscovery and auto-onboarding turned on, Factry Historian will automatically detect new measurements on the MQTT broker and onboard them with their original names.

### 1. Open the measurements page

In the Historian webpage, go to **Measurements**. You'll see a list of measurements already present. Select a measurement to review its settings, and scroll down to see a preview of the data coming in. After a few minutes of data collection, the data preview should look similar to this.

## Configuring the Factry Historian datasource plugin for Grafana

Grafana {{oss}} is included in the Docker Compose setup and provides dashboards and visualizations for your Historian data. To connect it to Factry Historian, you need to configure the Factry Historian datasource plugin.

### 1. Log in to Grafana

Open http://localhost:3000 in your browser and log in with the Grafana admin credentials you defined in the setup (the default is `admin` / `admin` if you didn't override them). If prompted, change the default password.
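As an aside: the `Topic` filter configured for the collector earlier (`spBv1.0/simulator/#`) uses standard MQTT wildcards, where `#` matches any number of remaining topic levels and `+` matches exactly one level. A minimal sketch of the matching rules (this is illustrative only, not collector code):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Check an MQTT topic against a filter with + and # wildcards."""
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # matches everything from here on
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:  # + matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("spBv1.0/simulator/#", "spBv1.0/simulator/DDATA/line1"))  # True
print(topic_matches("spBv1.0/simulator/#", "spBv1.0/other/DDATA/line1"))      # False
```

So the configured filter subscribes to every message the simulator publishes under `spBv1.0/simulator/`, which is why all of its measurements show up without listing them one by one.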
### 2. Configure the datasource

In Grafana, go to **Connections > Data sources > Add data source** and select **Factry Historian Datasource** from the list. Fill in the required fields:

- **Name**: choose a recognizable name, e.g. `Factry Historian`
- **URL**: `http://historian:8000`
- **Token**: go to the Historian web app, **Configuration > Users**, select the Factry Historian reader user and click the edit icon on the right. Navigate to the **Tokens** tab and click **+ Generate API token**. Give the token a name and an optional expiry date, then click **Submit**. Copy the generated token somewhere safe, either by selecting and copying or by clicking **Copy to clipboard**, and paste it in the **Token** field in Grafana.
- **Organization**: go to the Historian webpage, **Configuration > Organizations**, select the `Default organization` (unless you defined another name during the onboarding wizard) and copy the UUID using the double-file icon on the far right. Paste the organization UUID in the **Organization** field in Grafana.
- Select **Measurements** as the default tab.

### 3. Save & test

Click **Save & Test**. Grafana should confirm a successful connection. If Grafana gives a *403 Forbidden* error, restart the Factry Historian Docker container with `docker compose restart historian` and click **Save & Test** again, or go to the Historian webpage to confirm the onboarding wizard was completed.

## Visualizing data in Grafana

Once the Factry Historian datasource is configured, you can begin exploring your data in Grafana. If Grafana gives a *412: endpoint not available until setup wizard is completed* error, go to the Historian webpage to confirm the onboarding wizard was completed.

- Go to **Explore** in Grafana and select your Factry Historian datasource.
- In the **Measurements** tab, select the database you created in the [Creating a time series database](#creating-a-time-series-database) step.
- Choose a measurement (for example, `simulator/line 1/bt1s/flowrate`).
- Run the query and confirm that data points appear in the time series graph.

To build a dashboard:

- Go to **Dashboards > Create dashboard**.
- Add a new panel with **Add visualization**.
- Select the Historian datasource and configure your query.
- Adjust visualization options (time series, gauge, table, ...).
- Save the dashboard.
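If **Save & Test** fails even though the URL and token are correct, one quick sanity check is that the value you pasted in the **Organization** field is the organization's UUID and not its display name. A tiny (hypothetical) helper using only Python's standard library:

```python
import uuid

def looks_like_uuid(value: str) -> bool:
    """Return True if value parses as a UUID (any version)."""
    try:
        uuid.UUID(value.strip())
        return True
    except ValueError:
        return False

print(looks_like_uuid("123e4567-e89b-12d3-a456-426614174000"))  # True
print(looks_like_uuid("Default organization"))                  # False
```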
This gives you a real-time view of the signals collected by Factry Historian.

## Next steps

You now have a running Factry Historian environment, with data flowing in from your first collector and visualizations available in Grafana. From here, you can expand your evaluation and explore more advanced capabilities.

Organize your data:

- Define Assets (docid: c4opdlucirj0draexrisp) and link measurements to equipment in your plant
- Use Events (docid: 0ith2zlcais5yywhsdtje) to capture production states, downtime, or process changes
- Apply Calculations (docid: opiodkgnngyjedx6zkg d) for derived values or KPIs

Add more collectors:

- Connect to different protocols such as OPC-UA, MQTT, Modbus, or SQL
- Run multiple collectors in parallel to cover various data sources
- Fine-tune collector configuration for performance and reliability

Extend dashboards:

- Build Grafana dashboards tailored to your use case
- Set up alerts based on thresholds or conditions
- Explore dashboard variables and annotations

Explore the API:

- Use the introduction to query raw measurement data, events, and asset information
- Integrate Historian data into external applications or analysis pipelines

Move towards production:

- Try the Advanced setup (docid: afzibxfvnoa9luyzhuygt) for more control over credentials, ports, and volumes
- Explore deployment on dedicated infrastructure (Linux VM)
- Contact Factry for licenses, support, or migration assistance from legacy historians

## Stopping and removing the setup

When you are done testing Factry Historian, you can stop the containers and optionally remove all related resources.

### 1. Stop the containers

From the project directory, run:

```bash
docker compose down
```

This stops all Historian, database, and Grafana containers while keeping the volumes and networks intact. Your data will still be available the next time you start the stack.

### 2. Remove containers, networks, and volumes

If you want to completely reset the environment, add the `--volumes` flag:

```bash
docker compose down --volumes
```
This will also delete all PostgreSQL and InfluxDB data stored in Docker volumes, so you'll start fresh next time. If you want to preserve test data (e.g. to continue your evaluation later), run `docker compose down` without `--volumes`. This way, data remains available when you restart the stack with `docker compose up -d`.

### 3. Remove unused images (optional)

To free up disk space, remove the Docker images as well:

```bash
docker image prune -a
```

This deletes all images that are not used by running containers.

### 4. Verify cleanup

Run:

```bash
docker ps -a
docker volume ls
docker network ls
```

These commands let you check whether any containers, volumes, or networks from the Historian setup remain.
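The cleanup check can also be scripted by filtering those three listings for names that mention the stack. A minimal sketch, assuming the resources were named after the `historian` project directory (exact names depend on where you ran `docker compose up`):

```python
import subprocess

def docker_names(*cmd: str) -> str:
    """Run a docker listing command and return its stdout."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def filter_names(output: str, keyword: str = "historian") -> list[str]:
    """Keep only the lines of a docker listing that mention the keyword."""
    return [line for line in output.splitlines() if keyword in line.lower()]

# Example (requires a running Docker daemon):
#   containers = filter_names(docker_names("docker", "ps", "-a", "--format", "{{.Names}}"))
#   volumes    = filter_names(docker_names("docker", "volume", "ls", "--format", "{{.Name}}"))
#   networks   = filter_names(docker_names("docker", "network", "ls", "--format", "{{.Name}}"))
```

An empty result for all three listings means the environment was fully removed.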