Creating Streams and Exporters with the CLI

The Stream Machine CLI is a command-line interface for interacting with Stream Machine. The cli hasn’t been published yet, so ask Bart or Robin for a copy.

The cli is nothing more than a thin wrapper around the OpenApi interface: you can perform all the cli actions described below directly via the OpenApi interface as well. You do need an authentication token though (see below) [1].

Most of the cli commands produce valid JSON output, so you can use them in command pipes with the excellent jq tool.
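If you'd rather post-process the JSON in a script than with jq, that works just as well. A minimal sketch; the stream list below is a stand-in for the output of a command such as strm streams list, and its exact shape is an assumption for illustration:

```python
import json

# Stand-in for JSON printed by a CLI command; in practice you would read
# this from the command's stdout (e.g. via subprocess).
cli_output = '[{"name": "bart", "tags": []}, {"name": "demo", "tags": []}]'

streams = json.loads(cli_output)
names = [s["name"] for s in streams]
print(names)  # ['bart', 'demo']
```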

Logging in

First create an account in the Stream Machine portal, and use that email address and password to log in with the cli.

strm auth login
Enter your Stream Machine portal email address [strm-bart@....]:  ...
Please enter your Stream Machine portal password:
Successfully logged in as strm-bart@...

This creates a file credentials.json in the application configuration directory [2].

You can use strm auth show to find the billing-id that has been assigned to you. You will need it when configuring the client drivers.

strm auth show
Credentials for strm-bart@...
Billing id = strmbart8891421710
Current token valid until = 2021-01-14T16:28:16 UTC

Or if you like json

strm auth show --json
{
    "email": "",
    "billingId": "strmbart8891421710",
    "expiresAt": 1610641696,
    "idToken": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjVmOTcxMm....",
    "refreshToken": "AOvuKvQjJ3VzF12-5qo7crLryG9fz5979Zss...."
}
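The expiresAt field is a Unix epoch timestamp. A small sketch, using the value from the JSON above, showing how it maps onto the human-readable expiry that strm auth show prints:

```python
import json
from datetime import datetime, timezone

# Mirrors the relevant part of the `strm auth show --json` output
# (tokens omitted); only expiresAt matters here.
auth = json.loads('{"billingId": "strmbart8891421710", "expiresAt": 1610641696}')

# Convert the epoch seconds to a UTC datetime.
expiry = datetime.fromtimestamp(auth["expiresAt"], tz=timezone.utc)
print(expiry.strftime("%Y-%m-%dT%H:%M:%S UTC"))  # 2021-01-14T16:28:16 UTC
```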

Note that the cli has command completion. Add source <(strm --generate-completion bash|zsh|fish) to your shell configuration file [3]

Creating a stream

List your existing streams with

strm streams list

And create one [4]

strm streams create bart
{
    "name": "bart",
    "tags": [],
    "credentials": {
        "clientId": "3p5hau1zunkyba0...",
        "clientSecret": "Z^H^s_Xdm!h9W..."
    }
}

The billingId, clientId and clientSecret triplet is what identifies your stream when you interact with Stream Machine.

With this information you have enough to start sending data to Stream Machine. With the same credentials you can connect to the egress endpoint with a websocket client to receive the events as you sent them.

Creating decrypted streams

If you want to have Stream Machine decrypt data with certain consent levels, you need to create an output stream.

strm outputs create --help
Usage: strm outputs create [OPTIONS] name output-name

  Create a new output

-cl, --consent-level INT         The consent levels for this
                                 output.
  -clt, --consent-level-type [GRANULAR|CUMULATIVE]
  -d, --description TEXT           Optional description to describe
                                   the purpose of this output.
  -t, --tag TEXT                   Optional tag for this output.
  -dn, --decrypter-name TEXT       Name for the decrypter that will
                                   produce this output.
  -h, --help                       Show this message and exit

name         The name of the stream for which an output is created.
  output-name  The name of the output that is created.

Note: decrypter-name currently has no use. It’s better to leave it at its default value.

So let’s create one, with two consent levels, and a granular interpretation.

strm outputs create bart level0-1 -cl 0 -cl 1 -clt GRANULAR
{
  "linkedStream": "bart",
  "name": "level0-1",
  "tags": [],
  "credentials": {
    "clientId": "wvs5pkr6q48...",
    "clientSecret": "8IhY!HnK(...#"
  },
  "consentLevelType": "GRANULAR",
  "consentLevels": [ 0, 1 ],
  "decrypterName": "default"
}

So this output stream named level0-1 captures data from the encrypted stream bart (for my billing-id). It will drop all events that don’t have at least consent levels 0 and 1 in the event. The other way of defining decrypted streams is with consent level type CUMULATIVE: the decrypted stream then has just one consent level, and it accepts all events that have at least that consent level, decrypting PII fields up to and including the decrypted stream's consent level.
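The two interpretations can be summarised in a small sketch. This is my own rendering of the rules described above, not Stream Machine code, and it assumes a cumulative event carries at least one consent level:

```python
def event_accepted(event_levels, stream_levels, level_type):
    """Decide whether a decrypted output stream accepts an event.

    GRANULAR:   the event must carry every consent level the stream requires.
    CUMULATIVE: the stream has exactly one consent level; the event's
                highest consent level must be at least that high.
    """
    if level_type == "GRANULAR":
        return set(stream_levels) <= set(event_levels)
    if level_type == "CUMULATIVE":
        (stream_level,) = stream_levels  # cumulative streams have one level
        return max(event_levels) >= stream_level
    raise ValueError(f"unknown consent level type: {level_type}")

# The level0-1 output above requires consent levels 0 and 1:
print(event_accepted([0, 1, 2], [0, 1], "GRANULAR"))  # True: 0 and 1 present
print(event_accepted([0, 2], [0, 1], "GRANULAR"))     # False: 1 is missing
```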

Exporting to S3

If you want to export stream data to AWS S3, you first need to create a "sink" pointing to the S3 bucket.

Create a Sink

You first need to create an AWS S3 JSON credentials file that gives Stream Machine write [5] access to a specific bucket. Follow along with this tutorial, which boils down to:

aws iam create-user --user-name streammachine-bucket1
aws iam attach-user-policy --user-name streammachine-bucket1 \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-access-key --user-name streammachine-bucket1 > s3.json
strm sinks create s3 s3-bart --bucket-name bvandeenen-streammachine-bucket -cf s3.json
{
    "sinkType": "S3",
    "sinkName": "s3-bart",
    "bucketName": "bvandeenen-streammachine-bucket"
}
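As footnote [5] notes, AmazonS3FullAccess grants far more than is needed. A sketch of a least-privilege alternative scoped to write access on one bucket (the bucket name is an example; attach it to the user as an inline or managed policy instead of AmazonS3FullAccess):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::bvandeenen-streammachine-bucket/*"
    }
  ]
}
```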

You can see all your sinks with strm sinks list.

Create an exporter

strm exporters create --help
Usage: strm exporters create [OPTIONS] streamName

  Create a new exporter

  -en, --exporter-name TEXT     Name of the exporter.
  -sn, --sink-name TEXT         The name of the sink that should be
                                used for this exporter.
  -st, --sink-type [GCLOUD|S3]  The type of sink that should be used
                                for this exporter.
  -i, --interval INT            The interval in seconds between each
                                batch that is exported to the configured sink.
-pp, --path-prefix TEXT       Optional path prefix. Every object
                              that is exported to the configured sink will
                              have this path prepended to the resource name.
  -h, --help                    Show this message and exit

  streamName  Name of the stream that is being exported

Let’s create an exporter on the 'level0-1' output that we just created. Exporter names are unique per connected stream, so you could always call them 's3' for instance.

strm exporters create -en s3 -sn s3-bart -st S3 -i 30 -pp level01 level0-1
{
  "name": "s3",
  "linkedStream": "level0-1",
  "sinkName": "s3-bart",
  "sinkType": "S3",
  "intervalSecs": 30,
  "type": "BATCH",
  "pathPrefix": "level01"
}

I have a simulator running that’s sending some data. Looking in my AWS S3 bucket, I see files:

aws s3 ls bvandeenen-streammachine-bucket/docs/
2021-01-15 10:52:31      34592 2021-01-15T09:52:30-stream-c8f38550-6717-49e5-9ea8-cfb3d11bbe0c{0,1,2,3,4}.jsonl
2021-01-15 10:53:01      46133 2021-01-15T09:53:00-stream-c8f38550-6717-49e5-9ea8-cfb3d11bbe0c{0,1,2,3,4}.jsonl
2021-01-15 10:53:31      46142 2021-01-15T09:53:30-stream-c8f38550-6717-49e5-9ea8-cfb3d11bbe0c{0,1,2,3,4}.jsonl
2021-01-15 10:54:01      47679 2021-01-15T09:54:00-stream-c8f38550-6717-49e5-9ea8-cfb3d11bbe0c{0,1,2,3,4}.jsonl
2021-01-15 10:54:31      45365 2021-01-15T09:54:30-stream-c8f38550-6717-49e5-9ea8-cfb3d11bbe0c{0,1,2,3,4}.jsonl

And having a look inside one of the files [6]

aws s3 cp s3://bvandeenen-streammachine-bucket/docs/2021-01-15T09:52:30-stream-c8f38550-6717-49e5-9ea8-cfb3d11bbe0c\{0,1,2,3,4\}.jsonl - | head -1
{"strmMeta": {"schemaId": "nps_unified_v1", "nonce": 1791348613, "timestamp": 1610704326171, "keyLink": -1611778836, "billingId": "strmbart8891421710", "consentLevels": [0, 1, 2]}, "brand_source": "", "platform": "", "os": "", "version": "", "device_id": "heel mooi", "customer_id": "", "consent_level": "", "session_id": "", "swimlane_id": "", "swimlane_rank": 8, "swimlane_header": "", "swimlane_in_view": 30, "swimlane_action": "", "article_id": "", "article_rank": 1, "article_title": "", "article_in_view": 10, "article_action": "", "followable_id": "", "followable_rank": 0, "followable_title": "", "followable_type_value": false, "followable_type": "", "followable_context": "", "followable_in_view": 20, "followable_clicked": true, "followable_action": ""}
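Each line in the exported file is one JSON event wrapped with a strmMeta envelope. A quick sketch for pulling the metadata out of such a line, using a shortened stand-in for the record above:

```python
import json

# Shortened stand-in for one line of the exported .jsonl file above;
# the real record carries many more event fields.
line = ('{"strmMeta": {"schemaId": "nps_unified_v1", '
        '"billingId": "strmbart8891421710", "consentLevels": [0, 1, 2]}, '
        '"device_id": "heel mooi"}')

event = json.loads(line)
meta = event["strmMeta"]
print(meta["schemaId"], meta["consentLevels"])  # nps_unified_v1 [0, 1, 2]
```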

Using the OpenApi User Interface

If you just try one of the calls on the Swagger page you’ll receive:

{
  "error": {
    "code": 401,
    "status": "Unauthorized",
    "message": "The request could not be authorized"
  }
}

First ask the cli for a token with strm auth print-access-token, and paste the token into the dialog behind the Authorize button on the api page.

Now you can have a look at all the capabilities of the api.
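The same bearer token works for scripted calls against the api. A sketch of building an authorized request with only the Python standard library; the URL is a placeholder, not the real endpoint, and the token is shortened:

```python
import urllib.request

# Shortened output of `strm auth print-access-token`.
token = "eyJhbGciOi..."

# Placeholder URL; substitute the actual endpoint from the Swagger page.
# Sending the token as a Bearer header is what avoids the 401 above.
req = urllib.request.Request(
    "https://api.example.com/v1/streams",
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))  # Bearer eyJhbGciOi...
# urllib.request.urlopen(req) would then perform the call.
```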

1. strm auth print-access-token
2. ~/.config/stream-machine
3. ~/.bashrc or ~/.zshrc
4. tags are defined but not yet used anywhere in Stream Machine
5. AmazonS3FullAccess below is probably too much
6. the inconvenient { and } in the filenames will be changed.