DataPipeline is an independently developed, enterprise-grade real-time data integration platform built on the DataOps philosophy. It supports unified batch and stream processing across diverse, heterogeneous data infrastructures. By continuously building end-to-end data pipelines, it lets users focus on unlocking data value, seize market opportunities, rapidly gain a competitive advantage, and drive digital innovation.


DataPipeline focuses on the full spectrum of organizational data management, integrating considerations of both enterprise management characteristics and data technology advancements into its product development. It supports enterprises in achieving integrated innovation across data development, operations, and management, effectively addressing key challenges in the data integration process—such as high reliance on manual work, poor team collaboration, slower-than-expected response to demands, and complex, time-consuming management.

Configurable pipeline definition and no-code task construction, backed by rich policies for complex scenarios, greatly shorten the data delivery cycle.

Hierarchical management across every aspect of real-time data integration improves collaboration between different roles.

Supports custom data node development, enabling full connectivity across data silos.

A four-level data state and health observation system, spanning containers, applications, threads, and services, together with a panoramic cockpit, safeguards the stable operation of tasks.

Provides enterprise users with a unified platform to manage real-time synchronization and batch processing tasks across heterogeneous data nodes.

Supports databases including Oracle, IBM DB2, MySQL, MS SQL Server, PostgreSQL, GoldenDB, TDSQL, OceanBase, and MongoDB, obtaining accurate incremental data by parsing database logs.

It enables batch data transfer between any structured or semi-structured data nodes, supporting scenarios such as data warehousing, data hubs, database migration, and migration to the cloud.

The platform enables unified and centralized management, supporting both streaming and batch data processing. It allows data synchronization through methods such as incremental sync, full sync, and scheduled sync.
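
A minimal sketch of how a unified runner might dispatch the sync methods named above (full, incremental, scheduled). All function and variable names here are illustrative assumptions, not DataPipeline's actual API; incremental sync is modeled as reading only rows past a watermark.

```python
# Illustrative unified sync runner (hypothetical names, not DataPipeline's API).
SOURCE = [{"id": 1}, {"id": 2}, {"id": 3}]  # toy source table

def full_sync(source):
    """Copy every row from the source."""
    return list(source)

def incremental_sync(source, watermark):
    """Copy only rows newer than the last synchronized watermark."""
    return [row for row in source if row["id"] > watermark]

def run_sync(mode, watermark=0):
    """Dispatch one sync task by mode; a scheduler would call this periodically."""
    if mode == "full":
        return full_sync(SOURCE)
    if mode == "incremental":
        return incremental_sync(SOURCE, watermark)
    raise ValueError(f"unknown sync mode: {mode}")
```

A scheduled sync is simply either mode triggered on a timer, which is why the two primitives above suffice for the sketch.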

Supports a comprehensive range of data node types, including relational databases, NoSQL databases, big data platforms, domestic databases, message queues, file systems, and APIs, with rich, automated, and accurate semantic mapping between heterogeneous data.

Log-based Change Data Capture (CDC) acquires incremental data within seconds, transmitting the parsed change records to the data destination.
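
Conceptually, CDC parses database log entries into structured change records and replays them at the destination. The sketch below uses a toy in-memory log standing in for parsed database log records (e.g. binlog rows); all names are illustrative, not DataPipeline's actual implementation.

```python
from dataclasses import dataclass

# Toy change-log entries standing in for parsed database log records.
RAW_LOG = [
    {"op": "INSERT", "table": "users", "row": {"id": 1, "name": "Ada"}},
    {"op": "UPDATE", "table": "users", "row": {"id": 1, "name": "Ada L."}},
    {"op": "DELETE", "table": "users", "row": {"id": 1}},
]

@dataclass
class ChangeRecord:
    op: str      # INSERT / UPDATE / DELETE
    table: str
    row: dict

def capture_changes(log):
    """Parse raw log entries into structured change records (the CDC step)."""
    return [ChangeRecord(e["op"], e["table"], e["row"]) for e in log]

def apply_to_destination(records, dest):
    """Replay change records against a destination keyed by primary key."""
    for r in records:
        table = dest.setdefault(r.table, {})
        if r.op == "DELETE":
            table.pop(r.row["id"], None)
        else:  # INSERT and UPDATE both upsert by primary key
            table[r.row["id"]] = r.row
    return dest

dest = apply_to_destination(capture_changes(RAW_LOG), {})
```

Because changes are replayed in log order, the destination converges to the source's current state without rescanning full tables.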


Supports one-to-many and many-to-one data mapping, along with rich semantic mapping strategies between heterogeneous data. Default semantic mapping rules are configured and matched automatically, meeting highly complex, high-volume data synchronization scenarios.
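
The idea of default, automatically matched mapping rules can be sketched as a lookup table of type conversions between source and target dialects, with user overrides layered on top. The rule subset and function names below are hypothetical, not DataPipeline's actual rule set.

```python
# Illustrative default semantic-mapping rules between heterogeneous
# data nodes (hypothetical subset, not DataPipeline's actual rules).
DEFAULT_TYPE_RULES = {
    ("mysql", "hive"):   {"VARCHAR": "STRING", "INT": "INT", "DATETIME": "TIMESTAMP"},
    ("oracle", "mysql"): {"NUMBER": "DECIMAL", "VARCHAR2": "VARCHAR", "DATE": "DATETIME"},
}

def map_schema(source_dialect, target_dialect, columns, overrides=None):
    """Automatically map source column types to the target dialect.

    `overrides` lets a user replace individual default rules, mirroring
    the notion of configurable mapping strategies.
    """
    rules = dict(DEFAULT_TYPE_RULES[(source_dialect, target_dialect)])
    rules.update(overrides or {})
    return {name: rules[src_type] for name, src_type in columns.items()}

mapped = map_schema("mysql", "hive",
                    {"id": "INT", "name": "VARCHAR", "created_at": "DATETIME"})
```

Keeping defaults and overrides separate means the automated match handles the common case while complex scenarios stay configurable.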


Structure change policies respond to source schema changes that occur during task execution. Table structure definition changes can be pushed downstream fully automatically: fields added, deleted, or modified at the source are synchronized to the target end to guarantee data consistency.
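
Propagating structure changes amounts to replaying DDL-like change events against the target schema. This is a conceptual sketch under assumed event shapes; the field names and policy interface are illustrative only.

```python
# Conceptual sketch of pushing source structure changes downstream
# (illustrative names, not DataPipeline's actual policy API).
def apply_structure_change(target_schema, change):
    """Synchronize one added/dropped/modified column to the target schema."""
    kind, column = change["kind"], change["column"]
    if kind == "ADD":
        target_schema[column] = change["type"]
    elif kind == "DROP":
        target_schema.pop(column, None)
    elif kind == "MODIFY":
        target_schema[column] = change["type"]
    else:
        raise ValueError(f"unknown change kind: {kind}")
    return target_schema

schema = {"id": "INT", "name": "VARCHAR(64)"}
for change in [
    {"kind": "ADD", "column": "email", "type": "VARCHAR(128)"},
    {"kind": "MODIFY", "column": "name", "type": "VARCHAR(255)"},
    {"kind": "DROP", "column": "email"},
]:
    schema = apply_structure_change(schema, change)
```

Applying changes in source order keeps the target's structure consistent with the source at every point in the stream.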


As a “management cockpit” for data tasks, it gives you a detailed view of each task's synchronization. Upstream and downstream data changes and anomalies are monitored in real time, keeping you in control of task states such as activation, running, suspension, error, and restart.
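
The task states listed above suggest a small lifecycle state machine, which a cockpit would track per task. The states and allowed transitions below are an assumption inferred from the text, not DataPipeline's documented lifecycle.

```python
# Hypothetical task lifecycle (states inferred from the cockpit description).
ALLOWED_TRANSITIONS = {
    "created":    {"activated"},
    "activated":  {"running"},
    "running":    {"suspended", "error"},
    "suspended":  {"running"},
    "error":      {"restarting"},
    "restarting": {"running"},
}

class DataTask:
    def __init__(self):
        self.state = "created"

    def transition(self, new_state):
        """Move to a new state, rejecting transitions the lifecycle forbids."""
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        return self.state

task = DataTask()
for s in ("activated", "running", "error", "restarting", "running"):
    task.transition(s)
```

Modeling the lifecycle explicitly is what lets a monitoring cockpit flag anomalies: any transition outside the table is itself an alert-worthy event.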

The fusion engine adopts a distributed architecture with containerized deployment to meet business continuity requirements: when a node in the cluster fails, its tasks are rebalanced onto healthy nodes.
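
Rebalancing on node failure can be sketched as reassigning the failed node's tasks to the least-loaded healthy nodes. This is a simplified illustration of the idea, not the fusion engine's actual scheduler.

```python
# Sketch of rebalancing tasks off a failed node (illustrative only).
def rebalance(assignments, failed_node):
    """Move the failed node's tasks onto the least-loaded healthy nodes."""
    orphaned = assignments.pop(failed_node, [])
    healthy = sorted(assignments)  # deterministic node order for the sketch
    for task in orphaned:
        target = min(healthy, key=lambda n: len(assignments[n]))
        assignments[target].append(task)
    return assignments

cluster = {"node-a": ["t1", "t2"], "node-b": ["t3"], "node-c": ["t4"]}
cluster = rebalance(cluster, "node-a")
```

Least-loaded placement keeps the surviving nodes evenly utilized, so a single failure degrades capacity rather than continuity.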

The platform provides enterprise-level management capabilities such as user management, single sign-on, and system operation logs to ensure compliance with overall enterprise management requirements.

The platform offers user management functions, including administrator accounts, user groups, and permission management, allowing permissions to be allocated across different user groups and roles and task permissions to be segregated.
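
Group-based permission segregation reduces to resolving a user's group and checking the group's permission set before a task operation proceeds. Group names, users, and permission strings below are invented for illustration.

```python
# Minimal permission-segregation sketch (all names are illustrative).
GROUP_PERMISSIONS = {
    "admins":    {"task:create", "task:run", "task:delete", "user:manage"},
    "operators": {"task:run"},
    "viewers":   set(),
}

USER_GROUPS = {"alice": "admins", "bob": "operators", "carol": "viewers"}

def has_permission(user, permission):
    """Resolve the user's group, then check the group's permission set."""
    return permission in GROUP_PERMISSIONS[USER_GROUPS[user]]
```

Because permissions attach to groups rather than individuals, changing a role's rights is one edit to the group, and task permissions stay segregated by construction.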

The platform integrates with enterprise-level single sign-on (SSO) systems.

The platform provides log information such as high-risk operation logs, system configuration change logs, system login audit logs, system-level file management logs, and system-level error logs.

Without writing code, you can synchronize comprehensive data to your destination in real time and keep full control of your data tasks' status. This provides accurate, timely data for enterprise real-time risk control, customer management, production management, operations management, and decision-making, helping enterprises gain a head start in new competition.