Dolphin Scheduler for Big Data
A distributed, easily extensible visual DAG workflow scheduling system, dedicated to solving the complex dependencies in data processing and making the scheduling system work out of the box.
Its main objectives are as follows:
- Associate tasks according to their dependencies in a DAG, and visualize the running state of each task in real time.
- Support for many task types: Shell, MR, Spark, SQL (MySQL, PostgreSQL, Hive, Spark SQL), Python, Sub_Process, Procedure, etc.
- Support process scheduling, dependency scheduling, manual scheduling, and manual pause/stop/recovery; support failure retry/alarm, recovery from a specified node, killing tasks, etc.
- Support process priority, task priority, task failover, and task timeout alarm/failure
- Support global process parameters and node-level custom parameters
- Support online upload/download and management of resource files; support online file creation and editing
- Support online viewing and tailing of task logs, and online log download
- Implement cluster HA: the Master and Worker clusters are decentralized through ZooKeeper
- Support online viewing of Master/Worker CPU load and memory usage
- Support displaying process run history as a tree/Gantt chart; support task status and process status statistics
- Support backfilling data
- Support multi-tenancy
- Support internationalization
- More features are waiting for partners to explore
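At the heart of the first feature above is ordering tasks by their DAG dependencies. The sketch below is an illustrative topological sort in Python, not DolphinScheduler's actual (Java) implementation; task names and the `deps` format are invented for the example.

```python
from collections import deque

def topo_order(tasks, deps):
    """Return the tasks in an order that respects dependencies.

    tasks: iterable of task names.
    deps:  dict mapping a task to the list of tasks it depends on.
    Raises ValueError if the graph contains a cycle (i.e. is not a DAG).
    """
    indegree = {t: 0 for t in tasks}      # unfinished upstream count
    dependents = {t: [] for t in tasks}   # reverse edges: task -> downstream
    for task, upstream in deps.items():
        for u in upstream:
            indegree[task] += 1
            dependents[u].append(task)

    # Tasks with no unfinished dependencies are ready to run.
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(indegree):
        raise ValueError("cycle detected: not a DAG")
    return order

# Hypothetical ETL workflow: load depends on transform, transform on extract.
print(topo_order(["load", "extract", "transform"],
                 {"transform": ["extract"], "load": ["transform"]}))
# → ['extract', 'transform', 'load']
```

A real scheduler dispatches each task as soon as its upstream tasks finish rather than computing the whole order up front, but the dependency bookkeeping is the same.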
What's in Dolphin Scheduler
| Stability | Easy to use | Features | Scalability |
| --------- | ----------- | -------- | ----------- |
| Decentralized multi-master and multi-worker | Visualized process definition: key information such as task status, task type, retry count, the machine a task runs on, and custom variables is visible at a glance | Supports pause and recover operations | Supports custom task types |
| HA is supported by itself | All process definition operations are visualized: drag tasks to draw the DAG and configure data sources and resources; an API is also provided for third-party systems | Users can set up many-to-one or one-to-one mappings between DolphinScheduler tenants and Hadoop users, which is very important for scheduling big data jobs | Scheduling is distributed, so overall scheduling capacity grows roughly linearly with cluster size; Masters and Workers support dynamic online and offline |
| Overload processing: a task queue mechanism; the number of schedulable tasks on a single machine is configurable, and excess tasks are cached in the queue rather than jamming the machine | One-click deployment | Supports traditional shell tasks as well as big data platform task scheduling: MR, Spark, SQL (MySQL, PostgreSQL, Hive, Spark SQL), Python, Procedure, Sub_Process | |
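The overload-processing cell above describes a bounded per-machine capacity with a queue for the excess. Here is a toy Python model of that idea; the class and method names are invented for illustration and do not reflect DolphinScheduler's actual implementation.

```python
from collections import deque

class OverloadSafeWorker:
    """Toy model of per-machine overload protection: at most
    `capacity` tasks run concurrently; the rest wait in a queue."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.running = set()
        self.waiting = deque()

    def submit(self, task):
        # Run immediately if there is capacity; otherwise cache in the queue.
        if len(self.running) < self.capacity:
            self.running.add(task)
            return "running"
        self.waiting.append(task)
        return "queued"

    def finish(self, task):
        # When a running task completes, promote the next queued task.
        self.running.discard(task)
        if self.waiting and len(self.running) < self.capacity:
            self.running.add(self.waiting.popleft())

worker = OverloadSafeWorker(capacity=2)
print(worker.submit("a"), worker.submit("b"), worker.submit("c"))
# → running running queued
worker.finish("a")   # "c" is promoted from the queue
```

Because excess work waits in the queue instead of starting, a burst of submissions never exceeds the configured concurrency on any one machine.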
Partial system screenshots
For more documentation, please refer to the [DolphinScheduler online documentation].
Recent R&D plan
The work plan of Dolphin Scheduler is tracked in the R&D plan. The In Develop card lists what is currently being developed; the TODO card lists what is to be done (including feature ideas).
How to contribute code
Contributions are welcome; please refer to [How to contribute code] for the code submission process.
Dolphin Scheduler uses many excellent open source projects, such as Google Guava, Guice, gRPC, Netty, Ali BoneCP, Quartz, and many Apache open source projects. It is thanks to the shoulders of these projects that Dolphin Scheduler could be born, and we are very grateful for all the open source software we use! We hope not only to be beneficiaries of open source but also open source contributors, and we hope that partners who share the same passion and conviction for open source will join in and contribute.
- Submit an issue
- Mail list: firstname.lastname@example.org. Mail to email@example.com and follow the reply to subscribe to the mail list.
- Contact WeChat group manager, ID 510570367. This is for Mandarin(CN) discussion.
Please refer to the LICENSE file.