Introduction: Tool for performing shell-like programming in Kotlin


Kotlin Shell is a prototype tool for performing shell programming in Kotlin and Kotlin script. It provides a shell-like API which takes advantage of Kotlin features.

For examples go to the examples section.


Creating processes is extremely easy in Kotlin Shell:

shell {
  "echo hello world"()
}

// echo hello world

Piping is also supported:

shell {
  val toUpper = stringLambda { it.toUpperCase() to "" }
  pipeline { file("data.txt") pipe "grep abc".process() pipe toUpper }
}

// cat data.txt | grep abc | tr '[:lower:]' '[:upper:]'



The library is designed primarily for Unix-like operating systems and was fully tested on macOS. Windows support is not planned at the moment.

how to get Kotlin Shell

Kotlin Shell is distributed via GitHub Packages.

for scripting


Use the kshell command to run scripts from the command line. To read more about it and to download the command, go here.

You can also download binaries of kotlin-shell-kts to use the script definition in a custom way.

as library



repositories {
  maven("https://maven.pkg.github.com/OWNER/REPOSITORY") // GitHub Packages repository; OWNER/REPOSITORY are placeholders
}

dependencies {
  implementation("GROUP:kotlin-shell-core:VERSION") // placeholder coordinates
}

For more information about using GitHub Packages with Gradle go here or to the packages section of this repository.

Kotlin Shell features slf4j logging. To use it, add a logging implementation, or add the NOP logger to turn logging off:
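Without a binding on the classpath, slf4j prints a warning about a missing logger implementation. A sketch of silencing it with the NOP logger (Gradle Kotlin DSL; the version number is an example, not taken from this document):

```kotlin
dependencies {
    // NOP logger: satisfies slf4j and discards all log output
    implementation("org.slf4j:slf4j-nop:1.7.30")
}
```

Any other slf4j binding (e.g. logback) can be used instead to actually collect the logs.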


You can also download binaries of kotlin-shell-core to use the library in any other project.

how to run the scripts

in Kotlin script

Kotlin Shell scripts have the .sh.kts extension.

Some environment variables may be set to customize script execution. Go to the environment section to learn more.

kshell command

command line

To run the script type:

kshell script.sh.kts

Read more and download the command here.


Kotlin Shell scripts support shebang:

#!/usr/bin/env kshell

kotlinc command

A more low-level approach is supported with kotlinc:

kotlinc -cp PATH_TO_SHELL_KTS_ALL_JAR -Dkotlin.script.classpath  -script SCRIPT.sh.kts ARGS


kotlinc -cp lib/kotlin-shell-kts-all.jar -Dkotlin.script.classpath  -script hello.sh.kts

in Kotlin programs

Calling the shell block provides access to the Kotlin Shell API:

shell {
  // code
}


writing scripts

Kotlin Shell is driven by kotlinx.io and kotlinx.coroutines. Therefore the API is fully non-blocking and most functions are suspending. To take advantage of that, you need to pass the script as a suspend fun, together with a CoroutineScope, as parameters to the suspending shell block.

in Kotlin code

With given scope:

shell (
    scope = myScope
) {
    "echo hello world!"()
}

With new coroutine scope:

shell {
    "echo hello world!"()
}

in Kotlin Script

blocking api

The blocking API features basic shell commands without the need to wrap them in coroutine calls:

"echo hello world!".process().run()

It can be accessed in Kotlin code as well by using ScriptingShell class.

non blocking api

The shell block gives access to the full API of kotlin-shell. It receives GlobalScope as an implicit parameter:

shell {
    "echo hello world!"()
}


creating and starting processes

Before starting any process you need to create a ProcessExecutable. Then you can start it directly or use it in a pipeline.

system process

To start a new system process use the DSL:

val echo = systemProcess {
    cmd { "echo" withArg "hello" }
}

or extensions:

val echo = "echo hello".process() 

or simply:

"echo hello"()

To start a process from file contents use the File.process() extension:

val process = scriptFile.process(arg1, arg2)

or simply:

scriptFile(arg1, arg2)

kts process

Creating virtual KotlinScript processes is not implemented yet.

multiple calls to processes

To run an equivalent process multiple times call ProcessExecutable.copy().
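For illustration, reusing the same command twice might be sketched like this (hypothetical file names; a sketch based on the process and piping API described in this document):

```kotlin
shell {
    val cat = "cat data.txt".process()
    pipeline { cat pipe "grep abc".process() }
    // a ProcessExecutable can be started only once,
    // so copy() creates an equivalent executable for the second run
    pipeline { cat.copy() pipe "grep xyz".process() }
}
```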


Pipelines can operate on processes, lambdas, files, strings, byte packets and streams.

piping overview

piping introduction

Every executable element in Kotlin Shell receives its own ExecutionContext, which consists of stdin, stdout and stderr implemented as Channels. In the library these channels are used under the aliases ProcessChannel, ProcessSendChannel and ProcessReceiveChannel; their unit is always ByteReadPacket. Shell itself is an ExecutionContext and provides default channels:

  • stdin is always an empty and closed ProcessReceiveChannel, which effectively acts like /dev/null. It can be accessed elsewhere via the nullin member.
  • stdout is a rendezvous ProcessSendChannel that passes everything to System.out.
  • stderr is a reference to stdout.

Besides them there is also a special ProcessSendChannel member called nullout, which acts like /dev/null.

Pipeline elements are connected by ProcessChannels that override their context's default IO. Only the necessary streams are overridden, so the ones that are not piped are redirected to the channels that came with the context. Each element in the pipeline ends its execution after processing the last packet received before the close signal from its stdin channel.

Pipelines are logically divided into three parts: FROM, THROUGH and TO. The API is designed to look seamless, but in order to take full advantage of piping it is necessary to distinguish these parts. Every element can emit some output, but doesn't have to. Elements also shouldn't close their outputs after execution. That is done automatically by the piping engine, which ensures that channels used by other entities (such as stdout) won't be closed.

Every pipeline starts with a single-element FROM section. It can be a Process, lambda, File, String, InputStream, ByteReadPacket or Channel. Elements used here receive no input (for processes and lambdas nullin is provided). Then the THROUGH or TO part occurs. Piping THROUGH can be performed on Processes or lambdas and can consist of any number of elements. They receive the input simultaneously while the producer is running (due to the limitations of the zt-exec library, SystemProcess may wait until the end of input) and can emit output as they go. Every pipeline ends with a single-element TO section. Elements here take input but do not emit any output. If no TO element is provided, the pipeline builder will implicitly end the pipeline with stdout.

piping grammar

Schematic grammar for piping could look like this:

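The grammar block appears to be missing here; based on the FROM/THROUGH/TO description above, an informal sketch could be:

```text
pipeline ::= FROM (pipe THROUGH)* (pipe TO)?

FROM     ::= process | lambda | file | string | packet | stream | channel
THROUGH  ::= process | lambda
TO       ::= process | lambda | file | packet | stream | channel
# when TO is omitted, the pipeline is implicitly ended with stdout
```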

creating pipeline

To construct and execute the pipeline use pipeline builder:

pipeline { a pipe b pipe c }

A pipeline can be started with a Process, lambda, File, String, ByteReadPacket or InputStream. Once the pipeline is created it cannot be modified.

The pipeline builder takes an optional parameter mode of type ExecutionMode. It can be used for detaching or daemonizing the pipeline. By default it uses ExecutionMode.ATTACHED:

pipeline (ExecutionMode.ATTACHED) { a pipe b pipe c }
pipeline (ExecutionMode.DETACHED) { a pipe b pipe c }
pipeline (ExecutionMode.DAEMON) { a pipe b pipe c }

A constructed pipeline can be stored in an object of the Pipeline type:

val p = pipeline { a pipe b }

You can perform several operations on it:

  • Pipeline.join() joins the pipeline
  • Pipeline.kill() kills all elements of the pipeline

You can also access the processes member, which is a list of all processes in the pipeline.
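Putting these operations together (with the same hypothetical a and b placeholders used above; a sketch based on the calls just listed):

```kotlin
shell {
    val p = pipeline (ExecutionMode.DETACHED) { a pipe b }
    // ... do other work while the pipeline runs ...
    p.join()                      // wait for the whole pipeline to finish
    println(p.processes.size)     // inspect the processes it ran
}
```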

forking stderr

To fork stderr from process or lambda use forkErr:

pipeline { a pipe (b forkErr { /* fork logic */ }) pipe c }

It redirects the element's error stream to the given pipeline.

The builder function receives the new error ProcessReceiveChannel as an implicit argument. The function should return a new Pipeline. If this pipeline won't be ended with TO, it will implicitly be appended with stdout.

The fork logic can be stored in a variable:

val fork = pipelineFork { it pipe filter pipe file }
pipeline { a forkErr fork }

The fork belongs to the process executable or lambda itself, so it can be used outside a pipeline as well:

val process = "cmd arg".process() forkErr { /* fork */ }
val lambda = stringLambda { /* lambda */ } forkErr { /* fork */ }
pipeline { lambda pipe { /* ... */} }

As a shorthand it is possible to fork error directly to given channel:

val channel: ProcessChannel = Channel()
val b = a forkErr channel

lambdas in pipelines

Basic lambda structure for piping is PipelineContextLambda:

suspend (ExecutionContext) -> Unit

It takes a context which consists of stdin, stdout and stderr channels. It can receive content immediately after it is emitted by the producer, and its consumer can likewise receive the sent content simultaneously.

The end of input is signaled by a closed stdin. PipelineContextLambda shouldn't close its outputs after execution.
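As an illustration, a pass-through lambda might be sketched like this (using the contextLambda builder listed below and standard kotlinx.coroutines channel iteration; a sketch, not the library's documented behavior):

```kotlin
val passThrough = contextLambda { ctx ->
    // forward every packet from stdin to stdout;
    // the loop ends when stdin is closed (end of input)
    for (packet in ctx.stdin) {
        ctx.stdout.send(packet)
    }
    // outputs are intentionally NOT closed here;
    // the piping engine closes them when appropriate
}
```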

lambdas suitable for piping

There are several wrappers for PipelineContextLambda that can make piping easier. Most of them follow the template (stdin) -> Pair<stdout, stderr>:

| name | definition | builder |
| --- | --- | --- |
| PipelineContextLambda | suspend (ExecutionContext) -> Unit | contextLambda { } |
| PipelinePacketLambda | suspend (ByteReadPacket) -> Pair<ByteReadPacket, ByteReadPacket> | packetLambda { } |
| PipelineByteArrayLambda | suspend (ByteArray) -> Pair<ByteArray, ByteArray> | byteArrayLambda { } |
| PipelineStringLambda | suspend (String) -> Pair<String, String> | stringLambda { } |
| PipelineStreamLambda | suspend (InputStream, OutputStream, OutputStream) -> Unit | streamLambda { } |

shell {
    val upper = stringLambda { line ->
        line.toUpperCase() to ""
    }
    pipeline { "cat file".process() pipe upper pipe file("result") }
}


detaching overview

A detached process or pipeline is executed asynchronously to the shell. It can be attached or awaited at any time. All detached jobs that have not ended will be awaited after the end of the script, before the shell block finishes.

detaching process

To detach a process use the detach() function:

val echo = "echo hello world!".process()
detach(echo)

To join a process use the Process.join() method:

echo.join()

You can also perform these operations on multiple processes:

detach(p1, p2, p3)
await(p1, p2, p3)

To join all processes use joinAll().

To access detached processes use the detachedProcesses member. It stores a list of pairs mapping detached job ids to processes.

detaching pipeline

To detach a pipeline use the detach() builder:

detach { p1 pipe p2 pipe p3 }

or pipeline() with correct mode:

pipeline (ExecutionMode.DETACHED) { p1 pipe p2 pipe p3 }

To join a pipeline call Pipeline.join():

val pipeline = detach { p1 pipe p2 pipe p3 }
pipeline.join()

To access detached pipelines use the detachedPipelines member. It stores a list of pairs mapping detached job ids to pipelines.


To attach a detached job (process or pipeline) use fg():

  • fg(Int) accepting a detached job id. By default it will use 1 as the id.
  • fg(Process) accepting detached process
  • fg(Pipeline) accepting detached pipeline

To join all detached jobs call joinDetached().
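A sketch combining detaching and re-attaching (hypothetical commands; based on the calls listed above):

```kotlin
shell {
    // run a slow pipeline in the background
    detach { "find / -name data.txt".process() pipe "head -n 1".process() }
    "echo meanwhile..."()   // do other work while the job runs
    fg()                    // re-attach the detached job with the default id 1
}
```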


At the current stage, daemonizing processes and pipelines is implemented in a very unstable and experimental way. Therefore it should not be used.


The environment in Kotlin Shell is divided into two parts: shell environment and shell variables. The environment from the system is also copied.

To access the environment call:

  • the environment list or the env command for the shell environment
  • the variables list for shell variables
  • shellEnv or the set command for the combined environment
  • systemEnv for the environment inherited from the system

system environment

The system environment is copied to the shell environment at its creation. To access the system environment at any time call systemEnv.

shell environment

The shell environment is copied to the Shell from the system. It can be modified and is copied to sub shells.

To set environment use export:

export("KEY" to "VALUE")

To make it read-only add readonly:

readonly export("KEY" to "VALUE")
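Putting the environment calls together (a sketch based on the commands listed in this section; the expected output follows from echo):

```kotlin
shell {
    export("ANSWER" to "42")
    "echo ${env("ANSWER")}"()   // prints: 42
    unset("ANSWER")
}
```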

To print an environment variable use env:

env("KEY")
To remove it use unset:

unset("KEY")
shell variables

Shell variables are empty by default. They can be modified and are not copied to sub shells.

To set variable use variable:

variable("KEY" to "VALUE")

To make it read-only add readonly:

readonly variable("KEY" to "VALUE")

To print a shell variable use env:

env("KEY")
To remove a variable use unset:

unset("KEY")
special variables

Kotlin Shell uses some special variables for customising execution. They can be set explicitly by shell builders or inherited from the system. If any of these is not set, its default value will be used.

| variable | type | usage | default value |
| --- | --- | --- | --- |
| SYSTEM_PROCESS_INPUT_STREAM_BUFFER_SIZE | Int | size of the SystemProcessInputStream buffer | 16 |
| PIPELINE_RW_PACKET_SIZE | Long | maximal size of packets used in piping | 16 |
| PIPELINE_CHANNEL_BUFFER_SIZE | Int | size of ProcessChannels used in piping | 16 |
| REDIRECT_SYSTEM_OUT | YES/NO | specifies whether System.out should be bypassed with Shell.stdout; this synchronizes stdlib print() and println() with shell outputs | YES |

shell commands

Kotlin Shell implements some of the most popular shell commands with additions of special methods and properties.

To call the command use invoke():

cmd()

Its output will then be passed to stdout.

To pipe the command, simply put it in the pipeline:

pipeline { cmd pipe process }

implemented shell commands

  • & as detach
  • cd with cd(up) for cd .. and cd(pre) for cd -
  • env
  • exit as return@shell
  • export
  • fg
  • jobs
  • mkdir
  • print and echo as print()/println()
  • ps
  • readonly
  • set
  • unset
  • setting shell variable as variable

shell methods

Shell member functions provide easy ways for performing popular shell tasks:

  • file() - gets or creates file relative to current directory

custom shell commands

To implement a custom shell command, create an extension member of the Shell class and override its getter:

val Shell.cmd: ShellCommand
    get() = command {
        /* command implementation returning String */
    }
Such a command can be declared outside the shell block and be used as a dependency.
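For example, a trivial command might look like this (hypothetical hello name; assuming, as stated above, that the command body returns its output as a String):

```kotlin
val Shell.hello: ShellCommand
    get() = command {
        "hello from a custom command!"
    }

shell {
    hello()                                       // output goes to stdout
    pipeline { hello pipe "grep custom".process() }  // or into a pipeline
}
```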

custom shell methods

To implement a custom shell method, use the basic function template:

suspend fun Shell.() -> T

where T is the desired return type or Unit. Such functions can be declared outside the shell block and be used as a dependency.
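A sketch of such a method (hypothetical countLines name; assuming the file() helper listed below returns a java.io.File):

```kotlin
// a custom shell method: declared outside the shell block,
// usable inside any shell { } as a dependency
suspend fun Shell.countLines(name: String): Int =
    file(name).readLines().size

shell {
    val n = countLines("data.txt")
    "echo $n lines"()
}
```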

special properties

Shell members provide easy Kotlin-like access to popular parameters:

  • detachedPipelines
  • detachedProcesses
  • directory
  • environment
  • nullin
  • nullout
  • processes
  • shellEnv
  • systemEnv
  • variables

sub shells

creating sub shells

To create a sub shell use the shell block:

shell {
    /* code */
    shell {
        /* code */
    }
}

By default the sub shell will inherit the environment, directory, IO streams and constants. You can explicitly specify the shell variables and directory to use:

shell {
    shell (
        vars = mapOfVariables,
        dir = directoryAsFile
    ) {
        /* code */
    }
}

Sub shells suspend execution of the parent shell.

sub shells use cases

A sub shell can be used to provide a custom environment for commands:

shell {
  export("KEY" to "ONE")
  shell (
    vars = mapOf("KEY" to "TWO")
  ) {
    "echo ${env("KEY")}"() // TWO
  }

  // rest of the script
}

Or to temporarily change the directory:

shell {
  "echo ${env("PWD")}"() // ../dir

  shell (
    dir = file("bin")
  ) {
    "echo ${env("PWD")}"() // ../dir/bin
  }

  // rest of the script
}

scripting specific features


Kotlin Shell scripts support external and internal dependencies. The mechanism from kotlin-main-kts is used. Learn more about it in the KEEP and the blog post.

external dependencies

External dependencies from Maven repositories can be added via the @file:Repository and @file:DependsOn annotations:

@file:Repository("https://repo.maven.apache.org/maven2") // any Maven repository URL
@file:DependsOn("group:artifact:version") // placeholder coordinates
Then they can be imported with the standard import statement.

internal dependencies

To import something from a local file use @file:Import:

@file:Import("common.sh.kts") // placeholder path
Then they can be imported with the standard import statement.


Examples on writing Kotlin shell scripts can be found in the examples repository.

The integration tests in this repository are also a good source of detailed examples.
