Linkis builds a layer of computation middleware between upper-layer applications and underlying engines. By using the standard interfaces provided by Linkis, such as REST/WS/JDBC, upper-layer applications can easily access underlying engines such as MySQL/Spark/Hive/Presto/Flink, and at the same time share user resources such as unified variables, scripts, UDFs, functions and resource files across them.
As a computation middleware, Linkis provides powerful connectivity, reuse, orchestration, expansion, and governance capabilities. By decoupling the application layer and the engine layer, it simplifies the complex network call relationship, and thus reduces the overall complexity and saves the development and maintenance costs as well.
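For example, an upper-layer application can submit a task to Linkis through the REST interface exposed by the Linkis gateway. The sketch below is illustrative only: the host, port, user name and password are placeholders, and the exact endpoint and payload fields may differ between Linkis versions, so please refer to the official REST API documentation.

```bash
# 1. Log in to obtain a session cookie (stored in cookies.txt).
#    Host, port and credentials are placeholders for your own deployment.
curl -s -c cookies.txt -H "Content-Type: application/json" \
  -d '{"userName": "hadoop", "password": "your_password"}' \
  "http://127.0.0.1:9001/api/rest_j/v1/user/login"

# 2. Submit a SparkSQL task; the engineType and userCreator labels tell Linkis
#    which engine to route the task to and which creator/tenant it belongs to.
curl -s -b cookies.txt -H "Content-Type: application/json" \
  -d '{
        "executionContent": {"code": "show tables", "runType": "sql"},
        "params": {"variable": {}, "configuration": {}},
        "labels": {"engineType": "spark-2.4.3", "userCreator": "hadoop-IDE"}
      }' \
  "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit"
```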
Since the first release of Linkis in 2019, it has accumulated more than 700 trial companies and 1000+ sandbox trial users, spanning diverse industries from finance, banking and telecommunications to manufacturing, internet companies and more. Many companies have already used Linkis as the unified entrance to the underlying computation and storage engines of their big data platforms.
- Support for diverse underlying computation/storage engines.
  - Currently supported computation/storage engines: Spark, Hive, Flink, Python, Pipeline, Sqoop, openLooKeng, JDBC, Shell, etc.
  - Computation/storage engines to be supported: Presto (planned for 1.2.0), ElasticSearch (planned for 1.2.0), etc.
  - Supported scripting languages: SparkSQL, HiveQL, Python, Shell, Pyspark, R, Scala, JDBC, etc.
- Powerful task/request governance capabilities. With services such as Orchestrator, Label Manager and a customized Spring Cloud Gateway, Linkis provides multi-level label-based, cross-cluster/cross-IDC fine-grained routing, load balancing, multi-tenancy, traffic control, resource control, and orchestration strategies such as dual-active and active-standby.
- Full-stack computation/storage engine support. As a computation middleware, Linkis receives, executes and manages tasks and requests for various computation and storage engines, including batch tasks, interactive query tasks, real-time streaming tasks and storage tasks.
- Resource management capabilities. ResourceManager is not only capable of managing resources for Yarn and the Linkis EngineManager, but also provides label-based multi-level resource allocation and recycling, giving it powerful resource management capabilities across multiple Yarn clusters and multiple types of computation resources.
- Unified Context Service. Generates a context ID for each task/request, and associates and manages user and system resource files (JAR, ZIP, Properties, etc.), result sets, parameter variables, functions, etc., across users, systems and computing engines. Set once, automatically referenced everywhere.
- Unified materials. System- and user-level unified material management, which can be shared and transferred across users and systems.
| Engine Name | Supported Component Versions (Default Dependency Version) | Linkis Version Requirement | Included in Release Package by Default | Description |
|---|---|---|---|---|
| Spark | Apache 2.0.0~2.4.7, CDH >= 5.4.0 (default Apache Spark 2.4.3) | >=1.0.3 | Yes | Spark EngineConn, supports SQL, Scala, Pyspark and R code |
| Hive | Apache >= 1.0.0, CDH >= 5.4.0 (default Apache Hive 2.3.3) | >=1.0.3 | Yes | Hive EngineConn, supports HiveQL code |
| Python | Python >= 2.6 (default Python2*) | >=1.0.3 | Yes | Python EngineConn, supports Python code |
| Shell | Bash >= 2.0 | >=1.0.3 | Yes | Shell EngineConn, supports Bash shell code |
| JDBC | MySQL >= 5.0, Hive >= 1.2.1 (default Hive-jdbc 2.3.4) | >=1.0.3 | No | JDBC EngineConn, already supports MySQL and HiveQL, and can be quickly extended to other engines with a JDBC driver package, such as Oracle |
| Flink | Flink >= 1.12.2 (default Apache Flink 1.12.2) | >=1.0.3 | No | Flink EngineConn, supports FlinkSQL code, and also supports launching a new Yarn application in Flink Jar mode |
| Pipeline | - | >=1.0.3 | No | Pipeline EngineConn, supports file import and export |
| openLooKeng | openLooKeng >= 1.5.0 (default openLooKeng 1.5.0) | >=1.1.1 | No | openLooKeng EngineConn, supports querying the data virtualization engine openLooKeng with SQL |
| Sqoop | Sqoop >= 1.4.6 (default Apache Sqoop 1.4.6) | >=1.1.2 | No | Sqoop EngineConn, supports the data migration tool Sqoop |
| Impala | Impala >= 3.2.0, CDH >= 6.3.0 | ongoing | - | Impala EngineConn, supports Impala SQL code |
| Presto | Presto >= 0.180 | ongoing | - | Presto EngineConn, supports Presto SQL code |
| ElasticSearch | ElasticSearch >= 6.0 | ongoing | - | ElasticSearch EngineConn, supports SQL and DSL code |
| MLSQL | MLSQL >= 1.1.0 | ongoing | - | MLSQL EngineConn, supports MLSQL code |
| Hadoop | Apache >= 2.6.0, CDH >= 5.4.0 | ongoing | - | Hadoop EngineConn, supports Hadoop MR/YARN applications |
| TiSpark | 1.1 | ongoing | - | TiSpark EngineConn, supports querying TiDB with SparkSQL |
| Component | Description | Linkis 1.x (recommend 1.1.1) Compatibility |
|---|---|---|
| DataSphereStudio | DataSphere Studio (DSS for short) is WeDataSphere's one-stop data application development and management portal. | DSS 1.0.1 [released] [Linkis recommend 1.1.1] |
| Scriptis | A web tool for writing SQL, Pyspark, HiveQL and other scripts online and submitting them to Linkis for data analysis. | In DSS 1.0.1 [released] |
| Schedulis | A workflow task scheduling system based on secondary development of Azkaban, with financial-grade features such as high performance, high availability and multi-tenant resource isolation. | Schedulis 0.6.2 [released] |
| Qualitis | A data quality verification tool, providing data verification capabilities such as data integrity and correctness. | Qualitis 0.9.1 [released] |
| Streamis | A streaming application development and management tool. It supports the release of Flink Jar and Flink SQL, and provides development, debugging and production management capabilities for streaming applications, such as start/stop, status monitoring and checkpoints. | Streamis 0.1.0 [released] [Linkis recommend 1.1.0] |
| Exchangis | A data exchange platform that supports data transmission between structured and unstructured heterogeneous data sources; the upcoming Exchangis 1.0 will be integrated with DSS workflows. | Exchangis 1.0.0 [developing] |
| Visualis | A data visualization BI tool based on secondary development of Davinci, an open-source project of CreditEase, providing users with financial-grade data visualization capabilities in terms of data security. | Visualis 1.0.0 [developing] |
| Prophecis | A one-stop machine learning platform that integrates multiple open-source machine learning frameworks. Prophecis' MLFlow can be connected to DSS workflows through AppConn. | Prophecis 0.3.0 [released] |
Please go to the Linkis Releases Page to download a compiled distribution or a source code package of Linkis.
For more detailed guidance see: [Backend Compile] [Management Console Build]
## compile backend
### Mac OS/Linux
./mvnw -N install
./mvnw clean install -Dmaven.javadoc.skip=true -Dmaven.test.skip=true
### Windows
mvnw.cmd -N install
mvnw.cmd clean install -Dmaven.javadoc.skip=true -Dmaven.test.skip=true
## compile web
cd incubator-linkis/web
npm install
npm run build
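If both builds succeed, the outputs are typically found in the locations sketched below. Both paths are assumptions that can differ between Linkis versions, so please double-check against the compile documentation linked above.

```bash
# Backend distribution package produced by the assembly module (path is version-dependent)
ls assembly-combined-package/target/apache-linkis-*-bin.tar.gz

# Front-end static files produced by "npm run build" (relative to incubator-linkis/web)
ls dist
```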
Please refer to Quick Deployment to do the deployment.
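Once deployed, a quick way to verify the installation is the linkis-cli client shipped in the distribution's bin directory. The commands below are a sketch: the flag names and engine versions (e.g. shell-1, spark-2.4.3) are assumptions that depend on your Linkis release and the engine plugins you have installed.

```bash
# Submit a trivial shell task to check that the gateway, entrance and engine services are up
# (run from the Linkis installation directory; flags/engine versions may differ per release)
sh bin/linkis-cli -submitUser hadoop -engineType shell-1 -codeType shell -code "whoami"

# Submit a SparkSQL statement instead (requires the Spark engine plugin to be installed)
sh bin/linkis-cli -submitUser hadoop -engineType spark-2.4.3 -codeType sql -code "show tables"
```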
- The documentation of Linkis is in the Linkis-Website Git repository.
- Meetup videos on Bilibili.
Linkis services can be divided into three categories: computation governance services, public enhancement services, and microservice governance services.
- The computation governance services support the three major stages of processing a task/request: submission -> preparation -> execution;
- The public enhancement services include the material library service, context service, and data source service;
- The microservice governance services include Spring Cloud Gateway, Eureka and Open Feign.
Below is the Linkis architecture diagram. You can find more detailed architecture docs in Linkis-Doc/Architecture.
Based on Linkis, the computation middleware, we have built many applications and tools on top of it in the big data platform suite WeDataSphere. Below are the currently available open-source projects; more are on the way, so please stay tuned.
Contributions are always welcome. We need more contributors to build Linkis together, whether through code, documentation, or other support that helps the community.
For code and documentation contributions, please follow the contribution guide.
- For any questions or suggestions, please kindly submit an issue.
- By mail [email protected]
- You can scan the QR code below to join our WeChat group to get more immediate response.
We have opened an issue [Who is Using Linkis] for users to provide feedback and record who is using Linkis.
Since the first release of Linkis in 2019, it has accumulated more than 700 trial companies and 1000+ sandbox trial users, spanning diverse industries from finance, banking and telecommunications to manufacturing, internet companies and more.