Processes v2
Note: if you used Concord before, check the migration guide. It describes key differences between Concord flows v1 and v2.
Regardless of how the process starts, whether using a project and a Git repository or by sending a payload archive, Concord assumes a certain structure of the process's working directory:

- concord.yml - a Concord DSL file containing the main flow, configuration, profiles and other declarations;
- concord/**/*.concord.yml - a directory containing extra Concord YAML files;
- forms - a directory with custom forms.

Anything else is copied as-is and available for the process. Plugins can require other files to be present in the working directory.
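Put together, a minimal payload layout following this structure might look like the tree below (the file name under concord/ is a hypothetical example):

```
concord.yml              # main flow, configuration and other declarations
concord/
  cleanup.concord.yml    # extra Concord YAML file (hypothetical name)
forms/                   # custom form assets, if any
```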
The same structure should be used when storing your project in a Git repository. Concord clones the repository and recursively copies the specified directory path (/ by default, which includes all files in the repository) to the working directory for the process. If a subdirectory is specified in the Concord repository's configuration, any paths outside the configured path are ignored and not copied. The repository name is not included in the final path.
The default use case with the Concord DSL is to maintain everything in a single concord.yml file. Using a concord folder and files within it allows you to reduce the individual file sizes.

./concord/test.concord.yml:
configuration:
  arguments:
    nested:
      name: "stranger"

flows:
  default:
    - log: "Hello, ${nested.name}!"
./concord.yml:

configuration:
  arguments:
    nested:
      name: "Concord"

The above example prints out Hello, Concord! when running the default flow.
Concord folder merge rules:

- concord/**/*.concord.yml files are loaded in alphabetical order, including subdirectories.

The path to additional Concord files can be configured using the resources block.
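As a sketch, a resources block overriding the lookup paths might look like the following (the myFlows directory is a hypothetical example; check the resources documentation for the exact pattern syntax):

```yaml
resources:
  concord:
    - "glob:concord/{**/,}{*.,}concord.{yml,yaml}" # the default pattern
    - "glob:myFlows/**/*.concord.yml"              # hypothetical extra location
```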
Concord DSL files contain configuration, flows, profiles and other declarations.
The top-level syntax of a Concord DSL file is:
configuration:
  ...
flows:
  ...
publicFlows:
  ...
forms:
  ...
triggers:
  ...
profiles:
  ...
resources:
  ...
imports:
  ...
Let’s take a look at each section:
Flows listed in the publicFlows section are the only flows allowed as entry point values. This also limits the flows listed in the repository run dialog. When publicFlows is omitted, Concord considers all flows as public.

Flows from an imported repository are subject to the same setting. publicFlows defined in the imported repository are merged with those defined in the main repository.
publicFlows:
  - default
  - enterHere

flows:
  default:
    - log: "Hello!"
    - call: internalFlow

  enterHere:
    - log: "Using alternative entry point."

  # not listed in the UI repository start popup
  internalFlow:
    - log: "Only callable from another flow."
Process arguments, saved process state and automatically provided variables are exposed as flow variables:
flows:
  default:
    - log: "Hello, ${initiator.displayName}"
In the example above, the expression ${initiator.displayName} references the automatically provided variable initiator and retrieves its displayName field value.
Flow variables can be defined in multiple ways: in the configuration section's arguments block, in the process start request, or with the set step. Variables can be accessed using expressions, in scripts or in tasks:
flows:
  default:
    - log: "All variables: ${allVariables()}"

    - if: ${hasVariable('var1')}
      then:
        - log: "Yep, we got 'var1' variable with value ${var1}"
      else:
        - log: "Nope, we do not have 'var1' variable"

    - script: javascript
      body: |
        var allVars = execution.variables().toMap();
        print('Getting all variables in a JavaScript snippet: ' + allVars);
        execution.variables().set('newVar', 'hello');
The allVariables
function returns a Java Map object with all current
variables.
The hasVariable
function accepts a variable name (as a string parameter) and
returns true
if the variable exists.
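For example, combining the ways described above, a variable can come from the configuration's arguments block or be created mid-flow with the set step (a minimal sketch):

```yaml
configuration:
  arguments:
    greeting: "Hello"   # defined as a process argument

flows:
  default:
    - set:
        name: "Concord" # defined with the set step
    - log: "${greeting}, ${name}!"
```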
Concord automatically provides several built-in variables upon process execution in addition to the defined variables:

- txId - a unique identifier of the current process;
- parentInstanceId - an identifier of the parent process;
- workDir - path to the working directory of the current process;
- initiator - information about the user who started the process:
  - initiator.username - login, string;
  - initiator.displayName - printable name, string;
  - initiator.email - email address, string;
  - initiator.groups - list of the user's groups;
  - initiator.attributes - other LDAP attributes; for example, initiator.attributes.mail contains the email address.
- currentUser - information about the current user. Has the same structure as initiator;
- requestInfo - additional request data (see the note below):
  - requestInfo.query - query parameters of a request made using user-facing endpoints (e.g. the portal API);
  - requestInfo.ip - the client IP address the request originated from;
  - requestInfo.headers - headers of a request made using user-facing endpoints.
- projectInfo - the project's data:
  - projectInfo.orgId - the ID of the project's organization;
  - projectInfo.orgName - the name of the project's organization;
  - projectInfo.projectId - the project's ID;
  - projectInfo.projectName - the project's name;
  - projectInfo.repoId - the project's repository ID;
  - projectInfo.repoName - the repository's name;
  - projectInfo.repoUrl - the repository's URL;
  - projectInfo.repoBranch - the repository's branch;
  - projectInfo.repoPath - the repository's path (if configured);
  - projectInfo.repoCommitId - the repository's last commit ID;
  - projectInfo.repoCommitAuthor - the repository's last commit author;
  - projectInfo.repoCommitMessage - the repository's last commit message.
- processInfo - the current process' information:
  - processInfo.activeProfiles - list of active profiles used for the current execution;
  - processInfo.sessionToken - the current process' session token, which can be used to call the Concord API from flows.

LDAP attributes must be allowed in the configuration.
Note: only the processes started using the browser link
provide the requestInfo
variable. In other cases (e.g. processes
triggered by GitHub) the variable might be undefined
or empty.
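A flow can reference these built-in variables directly; for example (projectInfo fields are only populated for processes started from a project repository):

```yaml
flows:
  default:
    # txId and workDir are always available
    - log: "Process ${txId} runs in ${workDir}"
    # available for processes started from a project repository
    - log: "Repository: ${projectInfo.repoName}, branch: ${projectInfo.repoBranch}"
```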
Availability of other variables and "beans" depends on the installed Concord plugins, the arguments passed in at the process invocation, and the data stored in the request.
Concord has the ability to return process data when a process completes.
The names of returned variables should be declared in the configuration
section:
configuration:
  out:
    - myVar1
Output variables may also be declared dynamically using multipart/form-data parameters if allowed in a Project's configuration. CAUTION: this is not secure if secret values are stored in process variables.
$ curl ... -F out=myVar1 https://concord.example.com/api/v1/process
{
  "instanceId" : "5883b65c-7dc2-4d07-8b47-04ee059cc00b"
}
Retrieve the output variable value(s) after the process finishes:
# wait for completion...
$ curl ... https://concord.example.com/api/v2/process/5883b65c-7dc2-4d07-8b47-04ee059cc00b
{
  "instanceId" : "5883b65c-7dc2-4d07-8b47-04ee059cc00b",
  "meta": {
    "out" : {
      "myVar1" : "my value"
    }
  }
}
It is also possible to retrieve a nested value:
configuration:
  out:
    - a.b.c

flows:
  default:
    - set:
        a:
          b:
            c: "my value"
            d: "ignored"
$ curl ... -F out=a.b.c https://concord.example.com/api/v1/process
In this example, Concord looks for the variable a, its field b and the nested field c.
Additionally, the output variables can be retrieved as a JSON file:
$ curl ... https://concord.example.com/api/v1/process/5883b65c-7dc2-4d07-8b47-04ee059cc00b/attachment/out.json
{"myVar1":"my value"}
Any value type that can be represented as JSON is supported.
The dry-run mode allows you to execute a process without making any dangerous side effects. This is useful for testing and validating the flow logic before running it in production.

Note that the correctness of flow execution in dry-run mode depends on how the tasks and scripts in your flow handle it. Make sure all tasks and scripts involved properly handle dry-run mode to prevent unintended side effects.
To enable dry-run mode, set the dryRun
flag to true
in the process request:
curl ... -FdryRun=true -F out=myVar1 https://concord.example.com/api/v1/process
When the process is launched in dry-run mode, the system
log segment of the process will include
the following line:
Dry-run mode: enabled
Standard Concord tasks support dry-run mode and will not make any changes outside the process.
For example, the http
task will not make any non-GET requests in dry-run mode, the s3
task will
not actually upload files in dry-run mode, etc.
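For example, with the standard http task a GET step still executes in dry-run mode, while a non-GET step in the same flow is skipped (a sketch; the URLs are placeholders):

```yaml
flows:
  default:
    # GET requests are considered safe and still execute in dry-run mode
    - task: http
      in:
        method: GET
        url: "https://api.example.com/status"
        response: json
      out: result
    # non-GET requests are not performed in dry-run mode
    - task: http
      in:
        method: POST
        url: "https://api.example.com/update"
        body: "${result.content}"
```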
If a task does not support dry-run mode, the process will terminate with the following error:
Dry-run mode is not supported for '<task-name>' task (yet)
If a task does not support dry-run mode, but you are confident that it can be executed in dry-run mode, you can mark the task step as ready for dry-run mode:
flows:
  myFlow:
    - task: "myTaskAndImSureWeCanExecuteItInDryRunMode"
      meta:
        dryRunReady: true # dry-run ready marker for this step
Important: Use meta.dryRunReady only if you are certain that the task is safe to run in dry-run mode and cannot be modified to support it explicitly.
To add dry-run mode support to a task, see the task documentation.
By default, script steps do not support dry-run mode and the process will terminate with the following error:
Dry-run mode is not supported for this 'script' step
To add dry-run mode support to a script, see the scripting documentation.