# container

A container step is used to run a Docker container.

> **Important!** Flowpipe `container` steps require Docker to be running. Make sure you have the Docker daemon installed and running.

Each `container` step definition is a distinct instance of the step (with a distinct container image). A looped step (via `for_each`) will create a separate container instance for each iteration.

pipeline "another_pipe" {
step "container" "aws_s3_ls" {
image = "public.ecr.aws/aws-cli/aws-cli"
cmd = ["s3", "ls"]
env = {
AWS_ACCESS_KEY_ID = "AKIAFAKEKEY25WNAQKRC"
AWS_SECRET_ACCESS_KEY = "+bOnLljsFEZfakekeyJWQq7kkQaOsh5JbkTQH5YR"
}
}
output "buckets" {
value = step.container.aws_s3_ls.stdout
}
}
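
For instance, here is a minimal sketch of the `for_each` behavior described above (the `regions` parameter and command are hypothetical, used only for illustration); each iteration runs its own container instance:

```hcl
pipeline "list_buckets_by_region" {

  # Hypothetical parameter, for illustration only.
  param "regions" {
    type    = list(string)
    default = ["us-east-1", "us-west-2"]
  }

  # One container instance is created per iteration of the loop.
  step "container" "aws_s3_ls" {
    for_each = param.regions
    image    = "public.ecr.aws/aws-cli/aws-cli"
    cmd      = ["s3", "ls", "--region", each.value]
  }
}
```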

## Arguments

| Argument | Type | Optional? | Description |
|----------|------|-----------|-------------|
| `image` | String | Optional* | The docker image to use, e.g. `turbot/steampipe:latest`. You must specify `image` or `source` but not both. |
| `source` | String | Optional* | The path to a folder that contains the dockerfile or containerfile to build the container. You must specify `image` or `source` but not both. |
| `cmd` | List<String> | Optional | The cmd to use to start the container. |
| `cpu_shares` | Number | Optional | CPU shares (relative weight) for the container. |
| `entrypoint` | List<String> | Optional | Overwrite the default `ENTRYPOINT` of the image. The entrypoint allows you to configure a container to run an executable. |
| `env` | Map of Strings | Optional | A map of strings to set environment variables. |
| `memory` | Number | Optional | Amount of memory, in MB, your container can use at runtime. Defaults to 128. |
| `memory_reservation` | Number | Optional | Allows you to specify a soft limit smaller than `memory`, which is activated when Docker detects contention or low memory on the host machine. If you use `memory_reservation`, it must be set lower than `memory` for it to take precedence. Because it is a soft limit, it does not guarantee that the container won't exceed the limit. |
| `memory_swap` | Number | Optional | The amount of memory this container is allowed to swap to disk. If `memory` and `memory_swap` are set to the same value, this prevents containers from using any swap, because `memory_swap` is the amount of combined memory and swap that can be used, while `memory` is only the amount of physical memory that can be used. Set to -1 to enable unlimited swap. |
| `memory_swappiness` | Number | Optional | Tune container memory swappiness (0 to 100). |
| `read_only` | Boolean | Optional | If true, the container will be started as read-only. Defaults to false. |
| `user` | String | Optional | User to run the container as (format: `<name\|uid>[:<group\|gid>]`). |
| `workdir` | String | Optional | The working directory for commands to run in. |

This step also supports the common step arguments and attributes.
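
For illustration, a hedged sketch of a step that combines several of these arguments (the image, command, and values are placeholders, not taken from the Flowpipe docs):

```hcl
pipeline "resource_limited_run" {

  step "container" "disk_usage" {
    image      = "alpine:3"          # placeholder image
    entrypoint = ["/bin/sh", "-c"]   # override the image's default ENTRYPOINT
    cmd        = ["du -sh /var/log"]
    user       = "nobody"            # run as a non-root user
    workdir    = "/"
    memory     = 256                 # MB
    cpu_shares = 512
    read_only  = true
  }

  output "usage" {
    value = step.container.disk_usage.stdout
  }
}
```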

## Attributes (Read-Only)

| Attribute | Type | Description |
|-----------|------|-------------|
| `container_id` | String | The container ID. |
| `exit_code` | Number | The exit code from running the container. |
| `lines` | List of objects | Ordered list of all lines of output (stdout & stderr) from the container. |
| `stdout` | String | STDOUT output from the container. |
| `stderr` | String | STDERR output from the container. |
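
For example, a small sketch (a hypothetical pipeline) that uses `exit_code` and `container_id` in outputs:

```hcl
pipeline "check_exit" {

  step "container" "hello" {
    image = "hello-world"
  }

  output "succeeded" {
    # An exit code of 0 indicates the container exited successfully.
    value = step.container.hello.exit_code == 0
  }

  output "container_id" {
    value = step.container.hello.container_id
  }
}
```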

### lines

The `lines` attribute provides a list of all lines of output from the container, in order. `lines` is a list of objects, where each object has a `stream` (`stdout` or `stderr`) and a `line` (the line of output), for example:

```hcl
[
  {
    stream = "stdout"
    line   = "This is the first"
  },
  {
    stream = "stdout"
    line   = "block of STDOUT"
  },
  {
    stream = "stderr"
    line   = "This is the first"
  },
  {
    stream = "stderr"
    line   = "block of STDERR"
  },
  {
    stream = "stdout"
    line   = "This is the second"
  },
  {
    stream = "stdout"
    line   = "block of STDOUT"
  },
]
```

Usage example:

pipeline "simple_pipe" {
step "container" "hello" {
image = "hello-world"
}
output "combined_lines" {
value = step.container.hello.lines[*].line
}
output "stdout_lines" {
value = [for v in step.container.hello.lines[*] : v.line if v.stream == "stdout"]
}
output "stderr_lines" {
value = [for v in step.container.hello.lines[*] : v.line if v.stream == "stderr"]
}
output "combined_text" {
value = join("\n",step.container.hello.lines[*].line)
}
output "stdout_text" {
value = join("\n",[for v in step.container.hello.lines[*] : v.line if v.stream == "stdout"])
}
output "stderr_text" {
value = join("\n",[for v in step.container.hello.lines[*] : v.line if v.stream == "stderr"])
}
}

## Source vs Image

You must specify either an `image` or a `source`, but not both. The `image` will be pulled via standard docker client commands; the image must be publicly accessible, or you must `docker login` to access a private repo. You may instead specify a `source`, in which case a custom image will be built before the step is run. The image will be updated when anything in the source directory changes, and it will be deleted if the step is deleted.

### Container Step using an Image

When using `image`, the image is pulled from a registry via standard docker client commands:

pipeline "another_pipe" {
step "container" "aws_s3_ls" {
image = "public.ecr.aws/aws-cli/aws-cli"
cmd = ["s3", "ls"]
env = {
AWS_ACCESS_KEY_ID = "AKIAFAKEKEY25WNAQKRC"
AWS_SECRET_ACCESS_KEY = "+bOnLljsFEZfakekeyJWQq7kkQaOsh5JbkTQH5YR"
}
}
output "buckets" {
value = step.container.aws_s3_ls.stdout
}
}

### Container Step using Source

When using `source`, the path is relative to the mod location:

pipeline "another_pipe" {
step "container" "aws_s3_ls" {
source = "./my_aws"
cmd = ["s3", "ls"]
env = {
AWS_ACCESS_KEY_ID = "AKIAFAKEKEY25WNAQKRC"
AWS_SECRET_ACCESS_KEY = "+bOnLljsFEZfakekeyJWQq7kkQaOsh5JbkTQH5YR"
}
}
output "buckets" {
value = step.container.aws_s3_ls.stdout
}
}
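
As an illustration of what the source folder might contain, here is a hypothetical `./my_aws/Dockerfile` (an assumption for this example, not taken from the Flowpipe docs):

```dockerfile
# Hypothetical ./my_aws/Dockerfile: build a small image based on the AWS CLI.
FROM public.ecr.aws/aws-cli/aws-cli

# Bake in a default region; credentials are still passed via the step's env.
ENV AWS_DEFAULT_REGION=us-east-1
```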

## Labels

Docker labels are automatically added to images, containers, volumes, etc. using the standard format, including the standard `org.opencontainers` labels as well as Flowpipe-specific labels prefixed with `io.flowpipe`.

```
// container
"Labels": {
  "io.flowpipe.name": "test_suite_mod.pipeline.with_functions.function.hello_nodejs_step",
  "io.flowpipe.type": "container",
  "org.opencontainers.container.created": "2023-10-12T04:29:32Z",
  "org.opencontainers.image.created": "2023-10-12T04:29:29Z"
}
```
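
Because the label keys are predictable, they can be used to find Flowpipe-managed resources with standard Docker tooling; a hedged sketch (the label keys are taken from the example above):

```shell
# List containers created by Flowpipe, filtering on the Flowpipe-specific type label.
docker ps --all --filter "label=io.flowpipe.type=container"

# List images that carry an io.flowpipe.name label.
docker images --filter "label=io.flowpipe.name"
```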