Helios FHIR Server Installation Guide
Introduction
This document provides step-by-step instructions for getting the Helios FHIR Server up and running quickly. It is aimed at developers, system administrators, and anyone who wants to try out the Helios FHIR Server.
Provided below are installation instructions for:
- Docker - running Helios FHIR Server in a Docker container
- Native - running Helios FHIR Server on native hardware such as a developer's laptop or a dedicated server.
- AWS - running Helios FHIR Server in Amazon Web Services.
The Helios FHIR Server runs on Apache Cassandra, and a Cassandra Quickstart Guide for both Docker and native installations is provided below. Apache Cassandra's documentation is available at https://cassandra.apache.org.
Prerequisites
Docker Installations
Ensure that Docker's memory allocation is set to at least 24 GB (in Docker Desktop, Settings -> Resources -> Memory). The default value is NOT sufficient for running the Helios FHIR Server.
- Cassandra running in a Docker container - Cassandra Quickstart Guide
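To confirm how much memory the Docker engine can actually use, check the Total Memory value it reports:
docker info | grep -i "total memory"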
Native Installations
- Java 21 - either Oracle Java Standard Edition 21 or OpenJDK 21
- Cassandra installed locally - Cassandra Quickstart Guide
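You can confirm which Java version is active before installing:
java --version
The output should report a 21.x version.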
AWS Installations
- You must have an AWS Account
- You must also have Terraform installed.
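You can verify the Terraform prerequisite from a terminal:
terraform -version
If you also have the AWS CLI installed (optional for these instructions), aws sts get-caller-identity confirms that your AWS credentials are configured.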
Installing Helios FHIR Server
Docker Install Instructions
Docker Compose Instructions
Create a file named docker-compose.yml with the contents below, then run the following command:
docker compose up
services:
  helios-fhir-server:
    image: gcr.io/helios-fhir-server/enterprise-edition:latest
    container_name: helios-fhir-server
    hostname: helios-fhir-server
    ports:
      - 8181:8181
    environment:
      - CASSANDRA_PROPERTIES_CONTACTPOINTS=cassandra
      - KAFKA_PROPERTIES_BOOTSTRAPSERVERS=kafka:19092
      - SYSTEM_EXPORTPATH=/bulk-exports
      - CASSANDRA_PROPERTIES_REQUESTTIMEOUTINMS=8000
    networks:
      - hfs-network
    volumes:
      - bulk-exports:/bulk-exports
    healthcheck:
      test: curl -f http://helios-fhir-server:8181/fhir/healthcheck || exit 1
      interval: 30s
      timeout: 10s
      retries: 20
    depends_on:
      cassandra:
        condition: service_healthy
      kafka:
        condition: service_healthy
  kafka-consumer:
    image: gcr.io/helios-fhir-server/kafka-consumer:latest
    container_name: "kafka-consumer"
    hostname: kafka-consumer
    environment:
      - CASSANDRA_PROPERTIES_CONTACTPOINTS=cassandra
      - KAFKA_PROPERTIES_BOOTSTRAPSERVERS=kafka:19092
      - SYSTEM_SQLONFHIRSERVERURL=http://sof-js:3000
      - SYSTEM_SCHEMAREGISTRYURL=http://schema-registry:8081
      - SYSTEM_EXPORTPATH=/bulk-exports
    networks:
      - hfs-network
    volumes:
      - bulk-exports:/bulk-exports
    depends_on:
      cassandra:
        condition: service_healthy
      kafka:
        condition: service_healthy
      helios-fhir-server:
        condition: service_healthy
  cassandra:
    image: cassandra:4.1.5
    container_name: "cassandra"
    hostname: cassandra
    ports:
      - "${CASSANDRA_PROPERTIES_PORT}:9042"
    networks:
      - hfs-network
    healthcheck:
      test: ["CMD", "cqlsh", "-e", "describe keyspaces"]
      interval: 30s
      timeout: 10s
      retries: 5
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
    container_name: "zookeeper"
    hostname: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zookeeper:2888:3888
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - hfs-network
    healthcheck:
      test: echo ruok | nc localhost 2181 || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
  kafka:
    image: confluentinc/cp-kafka:7.3.2
    container_name: "kafka"
    hostname: kafka
    environment:
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:${KAFKA_PROPERTIES_PORT},DOCKER://host.docker.internal:29092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_BROKER_RACK: "rack1"
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: ${DOCKER_HOST_IP:-127.0.0.1}
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
      KAFKA_NUM_PARTITIONS: 1
    networks:
      - hfs-network
    healthcheck:
      test: kafka-cluster cluster-id --bootstrap-server 127.0.0.1:${KAFKA_PROPERTIES_PORT} || exit 1
      interval: 1s
      timeout: 60s
      retries: 60
    depends_on:
      - zookeeper
  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.2
    container_name: "schema-registry"
    hostname: schema-registry
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "kafka:19092"
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
    networks:
      - hfs-network
    healthcheck:
      test: curl -f http://localhost:8081/subjects || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 15s
    depends_on:
      zookeeper:
        condition: service_healthy
      kafka:
        condition: service_healthy
  parquet-writer:
    image: gcr.io/helios-fhir-server/parquet-writer:latest
    container_name: "parquet-writer"
    hostname: parquet-writer
    environment:
      - KAFKA_BOOTSTRAP_SERVERS=kafka:19092
      - SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - OUTPUT_PATH=/bulk-exports
      - POLL_DURATION_SECONDS=1
      - MAX_FILE_SIZE_BYTES=1073741824
      - FILE_TIMEOUT_SECONDS=20
    networks:
      - hfs-network
    volumes:
      - bulk-exports:/bulk-exports
    depends_on:
      kafka:
        condition: service_healthy
      schema-registry:
        condition: service_healthy
  sof-js:
    image: gcr.io/helios-fhir-server/sof-js:latest
    container_name: "sof-js"
    hostname: sof-js
    networks:
      - hfs-network
    ports:
      - "${SOFJS_PROPERTIES_PORT}:3000"
networks:
  hfs-network:
    name: hfs-network
    driver: bridge
volumes:
  bulk-exports:
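The compose file above references three host-port variables: CASSANDRA_PROPERTIES_PORT, KAFKA_PROPERTIES_PORT, and SOFJS_PROPERTIES_PORT. Docker Compose reads these from a .env file placed next to docker-compose.yml. A minimal sketch of such a file follows; the port values are assumptions, so pick any host ports that are free on your machine:
CASSANDRA_PROPERTIES_PORT=9042
KAFKA_PROPERTIES_PORT=9092
SOFJS_PROPERTIES_PORT=3000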
When done, the following command will COMPLETELY destroy all containers, networks, volumes, and images created by the above docker-compose.yml file. Use with caution.
docker compose down -v --rmi all
Docker Command Instructions
Alternatively, if you wish to use individual Docker commands rather than a Docker Compose file, follow the instructions below. They assume you have completed the Cassandra Docker Install steps.
Start the Helios FHIR Server in a container named helios-fhir-server, connecting to a Cassandra instance named cassandra on the hfs-network, and exposing port 8181, which we will use later to access the administrative user interface with a browser.
docker run -d -p 8181:8181 -e CASSANDRA_PROPERTIES_CONTACTPOINTS=cassandra --name helios-fhir-server --net hfs-network gcr.io/helios-fhir-server/enterprise-edition:latest
During initial startup, the logs will pause at the messages below. Karaf is performing initial installation tasks, which may take several minutes. This delay only happens on the first startup and is normal.
INFO [features-3-thread-1] Stopping bundles:
INFO [features-3-thread-1] org.ops4j.pax.logging.pax-logging-log4j2/2.1.3
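You can follow the startup logs from the host with:
docker logs -f helios-fhir-server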
Next, use the /opt/apache-karaf/bin/client command in the Docker terminal window, or use the following command, to connect to the Karaf console of your running Helios FHIR Server instance.
docker exec -it helios-fhir-server client
Next, check the status using the Helios FHIR Server Karaf shell (see Using the Helios FHIR Server Karaf Shell below).
Native Installation Instructions
Set $JAVA_HOME to point to your Java 21 installation (macOS):
export JAVA_HOME=`/usr/libexec/java_home -v 21`
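On Linux, point JAVA_HOME at your JDK install directory instead; the path below is an assumption and depends on your distribution:
export JAVA_HOME=/usr/lib/jvm/java-21-openjdk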
Download the current Helios FHIR Server distribution from the Enterprise Edition download site.
Extract the Helios FHIR Server distribution
tar xzvf helios-fhir-server-distributable-{version}-EnterpriseEdition.tar.gz
Set the KARAF_HOME Environment Variable
cd helios-fhir-server-distribution-{version}
export KARAF_HOME=$(pwd)
Alternatively, you can add the command export KARAF_HOME=[Helios FHIR Server root folder location] to your ~/.bashrc or ~/.bash_profile file.
Start Helios FHIR Server
bin/karaf
This starts the Helios FHIR Server and presents you with an interactive shell.
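If you prefer to run the server in the background rather than in an interactive shell, Apache Karaf distributions also include start and client scripts (a sketch, assuming the standard Karaf script layout):
bin/start
bin/client
Here bin/client connects an interactive shell to the already-running instance, which you can leave with logout.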
AWS Installation Instructions
A Reference Architecture and Terraform-based installation instructions have been created to help AWS installations get up and running quickly. The Reference Architecture consists of:
- A Virtual Private Cloud to isolate all services for the Helios FHIR Server
- A 3-node Cassandra cluster running across two availability zones
- The Helios FHIR Server running in an EKS cluster, configured for autoscaling
Please follow the step-by-step instructions in the AWS reference architecture guide to set up the Helios FHIR Server in your AWS account.
Using the Helios FHIR Server Karaf Shell
At the Karaf prompt, you can try log:tail or bundle:list to interact with the Helios FHIR Server and see some interesting information about the system.
- log:tail tails the Helios FHIR Server system log. You can inspect this log for messages like the lines below to know when the Helios FHIR Server has completed its startup.
12:32:55.471 INFO [features-3-thread-1] Deployment finished. Registering FeatureDeploymentListener
12:32:55.770 INFO [features-3-thread-1] Done.
- bundle:list lists all the bundles that are running in the Helios FHIR Server. This can be an easy way to check whether your system is healthy and the features you want are running.
Check the health of Helios FHIR Server
Using the Karaf shell, run the command bundle:list | grep -i failure. This shows any bundles that failed to start correctly; an empty result indicates success. Should you have a failure, run the command bundle:diag <failed bundle id> to display the error log for that bundle.
The Healthcheck Endpoint
The Helios FHIR Server provides a healthcheck endpoint at http://localhost:8181/fhir/healthcheck. This endpoint can be used to verify that the Helios FHIR Server is running and healthy.
The endpoint returns HTTP 200 OK if the Helios FHIR Server is running and healthy, and 503 Service Unavailable if it is not.
This bash command can be used to wait until the Helios FHIR Server has loaded properly (note the != comparison: the loop keeps polling until the endpoint returns 200):
while test "$(curl -LI localhost:8181/fhir/healthcheck -o /dev/null -w '%{http_code}\n' -s)" != "200" ; do sleep 5; done
Alternatively, the healthcheck endpoint also provides an array of bundles that are loading or in a failure state in the HTTP response body. If the endpoint returns [], then the list of bundles is empty and the Helios FHIR Server is healthy.
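For example, to inspect the response body directly:
curl -s http://localhost:8181/fhir/healthcheck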
Log in to the Helios FHIR Server interactive web user interface
Navigate to http://localhost:8181/ui in a browser. You will be prompted to log in; the default credentials are username admin@heliossoftware.com and password admin.
From here, you can check the status of the system at a glance on the landing dashboard, configure the system, enable and disable FHIR Resources and Search Parameters, and manage the Helios FHIR Server users.
Post Installation Steps
After you have verified the above steps, you should perform the following post-installation steps:
- Import Initial Data - The Helios FHIR Server ships with datasets that MUST be loaded into the system.
  - If you are using a Docker-based installation method, run:
    docker exec -it helios-fhir-server /opt/apache-karaf/bin/import-initial-data
  - For all other installation methods, run:
    $KARAF_HOME/bin/import-initial-data
- Configure Authentication - The Helios FHIR Server ships with open authentication (i.e., no authentication). See the Authentication options and configure them for your environment.
- Configure System Settings - Change the Full URL setting for your environment. This can also be set in the Helios FHIR Server Dashboard under the Settings menu.
- Generate a new JWT key pair
  - In the $KARAF_HOME/etc folder, execute the following command:
    mv helios-keystore.jks helios-keystore.jks.old
  - In the next command, change the -alias value from helios to another value of your choice and the -storepass value from changeit to another value:
    keytool -genkeypair -keystore helios-keystore.jks -alias helios -keyalg RSA -sigalg SHA384withRSA -storepass changeit
  - Change the keystoreAlias and keystorePassword values in Authentication Settings.
- After you have set up TLS/HTTPS for your server, change blockUnsecureRequests to true in Authentication Settings.
- If you are using our Reference Architecture, change the default value of require_tls in variables.tf to true and then run terraform apply to update the Helios FHIR Server installation.
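To confirm that the new keystore was generated, you can list its contents (substitute the -storepass value you chose above):
keytool -list -keystore helios-keystore.jks -storepass changeit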
Cassandra Quickstart Guide
Cassandra Docker Install
The easiest way to get started with Cassandra is to use the official Cassandra Docker image, by running the following series of commands:
First, create a network so that the Helios FHIR Server can talk to Cassandra by name. Our network will be called hfs-network.
docker network create -d bridge hfs-network
Next, start Cassandra in a container named cassandra, exposing port 9042 so that the Helios FHIR Server can connect.
docker run -d -p 9042:9042 --name cassandra --net hfs-network cassandra:4.1.5
Finally, check if Cassandra has started successfully.
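One way to check is to run nodetool inside the container and wait for the node to report UN (Up/Normal):
docker exec -it cassandra nodetool status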
Cassandra Native Install
The instructions below will walk you through the steps for installing Cassandra on your local machine or on a dedicated server.
First, make sure you have Java 11 installed and that it is your current version.
java --version
openjdk version "11.0.12" 2021-07-20
OpenJDK Runtime Environment Homebrew (build 11.0.12+0)
OpenJDK 64-Bit Server VM Homebrew (build 11.0.12+0, mixed mode)
From here, you can follow the instructions provided by Apache Cassandra (stop when you reach the heading Installing the Debian packages).
Alternatively, we have reproduced the necessary steps here:
Download the Apache Cassandra distribution
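For example (the URL below is an assumption; copy the exact link for your version from the Apache Cassandra downloads page, since older releases move to the Apache archive):
curl -OL https://archive.apache.org/dist/cassandra/4.1.5/apache-cassandra-4.1.5-bin.tar.gz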
Unpack the Apache Cassandra distribution
tar xzvf apache-cassandra-4.1.5-bin.tar.gz
Start Apache Cassandra
cd apache-cassandra-4.1.5
bin/cassandra
Check the Status of Cassandra
Wait until Cassandra has fully started. You should see a message saying "state jump to NORMAL" followed by "Startup complete" in Cassandra's log output.
Alternatively, use the nodetool utility to check on Cassandra's status.
bin/nodetool status
Once the result is UN, which stands for Up/Normal, the Cassandra installation is ready to use with the Helios FHIR Server.
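As a further check (the same probe used by the Docker Compose healthcheck above), you can confirm that Cassandra answers CQL queries:
bin/cqlsh -e 'describe keyspaces'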