Make data readily accessible and freely transferable

DataPipeline is an independently developed, enterprise-grade real-time data integration platform built on the DataOps philosophy, supporting unified batch and stream processing across diverse, heterogeneous data infrastructures. By continuously building end-to-end data pipelines, it lets users focus on unlocking data value, seizing market opportunities, rapidly building competitive advantage, and driving digital innovation.

Adhering to the DataOps philosophy

DataPipeline focuses on the full spectrum of organizational data management, integrating considerations of both enterprise management characteristics and data technology advancements into its product development. It supports enterprises in achieving integrated innovation across data development, operations, and management, effectively addressing key challenges in the data integration process—such as high reliance on manual work, poor team collaboration, slower-than-expected response to demands, and complex, time-consuming management.

A unified batch and stream data integration platform

Provides enterprise users with a unified platform to manage real-time synchronization and batch processing tasks across heterogeneous data nodes.

Comprehensive data node support

Supports a comprehensive range of data node types and rich, automated, and accurate semantic mapping between heterogeneous data sources, covering relational databases, NoSQL databases, big data platforms, homegrown databases, message queues, file systems, APIs, and more.

Change data capture

Log-based incremental data acquisition (Change Data Capture, CDC) captures change data within seconds and transmits the parsed change records to the data destination.
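
As a rough illustration (the record fields and replay logic below are hypothetical, not DataPipeline's actual internal format), log-based CDC boils down to parsing change records from the database log and replaying them against the destination:

    # A minimal sketch of replaying parsed CDC records; the ChangeRecord fields
    # are assumptions for illustration only.
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class ChangeRecord:
        table: str                          # source table the change belongs to
        op: str                             # "INSERT", "UPDATE" or "DELETE"
        key: Dict[str, object]              # primary-key values identifying the row
        after: Optional[Dict[str, object]]  # new column values (None for DELETE)

    def apply_to_destination(record: ChangeRecord, destination: Dict) -> None:
        """Replay one parsed change record against an in-memory 'destination'."""
        rows = destination.setdefault(record.table, {})
        row_key = tuple(sorted(record.key.items()))
        if record.op == "DELETE":
            rows.pop(row_key, None)
        else:                               # INSERT and UPDATE both upsert the row
            rows[row_key] = record.after

    # Example: an insert followed by an update captured from the log and replayed.
    dest: Dict = {}
    apply_to_destination(ChangeRecord("orders", "INSERT", {"id": 1}, {"id": 1, "amount": 10}), dest)
    apply_to_destination(ChangeRecord("orders", "UPDATE", {"id": 1}, {"id": 1, "amount": 12}), dest)
    print(dest)  # {'orders': {(('id', 1),): {'id': 1, 'amount': 12}}}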

Automatic mapping

Supports one-to-many and many-to-one data mappings and rich semantic mapping strategies between heterogeneous data sources. Default semantic mapping rules and automated matching handle highly complex, large-volume data synchronization scenarios.
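
A minimal sketch of what default semantic mapping looks like, assuming a hypothetical type-mapping table (the platform's built-in rules cover far more types and naming conventions):

    # Default type-mapping rules from a source dialect to a destination dialect;
    # the mapping table here is a hypothetical example.
    SOURCE_TO_DEST_TYPES = {
        "INT": "BIGINT",
        "VARCHAR": "STRING",
        "DATETIME": "TIMESTAMP",
    }

    def auto_map(source_schema: dict) -> dict:
        """Build a destination schema by applying the default type-mapping rules."""
        return {col: SOURCE_TO_DEST_TYPES.get(typ, "STRING") for col, typ in source_schema.items()}

    orders = {"id": "INT", "customer": "VARCHAR", "created_at": "DATETIME"}
    # One-to-many mapping: the same source table feeds several destinations.
    destinations = {name: auto_map(orders) for name in ("dw.orders", "search.orders")}
    print(destinations)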

DDL adaptation

Responds to source schema changes that occur during task execution through configurable structure-change policies. Table definition changes can be pushed downstream fully automatically: fields added, deleted, or modified in the source table structure are synchronized to the target to guarantee data consistency.
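
A simplified sketch of a structure-change policy, with hypothetical DDL event shapes; it only illustrates the idea of pushing source schema changes through to the target:

    # Apply captured DDL events to the target schema when the policy allows it;
    # event shapes and policy names are assumptions for illustration.
    target_schema = {"id": "BIGINT", "amount": "DECIMAL"}

    def apply_ddl(event: dict, schema: dict, policy: str = "auto") -> None:
        """Propagate one source DDL change downstream under the given policy."""
        if policy != "auto":            # e.g. a "pause and alert" policy would stop here
            raise RuntimeError("manual confirmation required for structure changes")
        if event["action"] == "ADD_COLUMN":
            schema[event["column"]] = event["type"]
        elif event["action"] == "DROP_COLUMN":
            schema.pop(event["column"], None)
        elif event["action"] == "MODIFY_COLUMN":
            schema[event["column"]] = event["type"]

    apply_ddl({"action": "ADD_COLUMN", "column": "currency", "type": "STRING"}, target_schema)
    apply_ddl({"action": "MODIFY_COLUMN", "column": "amount", "type": "DECIMAL(18,2)"}, target_schema)
    print(target_schema)  # {'id': 'BIGINT', 'amount': 'DECIMAL(18,2)', 'currency': 'STRING'}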

All-link data observation

Acting as a “management cockpit” for data tasks, the platform provides a detailed view of every synchronization task. Upstream and downstream data changes and anomalies are monitored in real time, keeping you in control of task states such as activation, running, suspension, error, and restart.
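
For illustration only (the task fields below are hypothetical, not DataPipeline's monitoring API), an observability check amounts to flagging tasks whose state or end-to-end lag needs attention:

    # Flag tasks that errored out or whose lag exceeds a threshold;
    # task records and field names are illustrative assumptions.
    tasks = [
        {"name": "mysql_to_kafka", "state": "RUNNING",   "lag_seconds": 2},
        {"name": "oracle_to_hive", "state": "ERROR",     "lag_seconds": 0},
        {"name": "pg_to_es",       "state": "SUSPENDED", "lag_seconds": 0},
    ]

    def unhealthy(task: dict, max_lag: int = 60) -> bool:
        """A task needs attention if it errored out or its lag is too high."""
        return task["state"] == "ERROR" or task["lag_seconds"] > max_lag

    for task in tasks:
        status = "ATTENTION" if unhealthy(task) else "OK"
        print(f"{task['name']:<16} {task['state']:<10} {status}")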

Distributed architecture

The fusion engine adopts a distributed architecture with containerized deployment to meet business continuity requirements. When a node in the cluster fails, its tasks are rebalanced to healthy nodes.
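
A toy sketch of the rebalancing idea using simple round-robin reassignment; the engine's actual scheduler is more sophisticated, and the node and task names below are hypothetical:

    # Move tasks from a failed node onto the remaining healthy nodes.
    from itertools import cycle

    assignments = {"node-1": ["task-a", "task-b"], "node-2": ["task-c"], "node-3": ["task-d"]}

    def rebalance(assignments: dict, failed_node: str) -> dict:
        """Reassign the failed node's tasks evenly across healthy nodes."""
        orphaned = assignments.pop(failed_node, [])
        healthy = cycle(sorted(assignments))     # spread orphaned tasks round-robin
        for task in orphaned:
            assignments[next(healthy)].append(task)
        return assignments

    print(rebalance(assignments, "node-1"))
    # {'node-2': ['task-c', 'task-a'], 'node-3': ['task-d', 'task-b']}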

Enterprise-level management

The platform provides enterprise-level management capabilities such as user management, single sign-on, and system operation logs to ensure compliance with overall enterprise management requirements.

No code required: a data pipeline can be deployed and put into use within 5 minutes.

Without writing code, you can synchronize data to your destination in real time and monitor the status of your data tasks in real time, providing accurate and timely data for real-time risk control, customer management, production management, operations management, decision-making, and more, helping enterprises get a head start in the new competitive landscape.

  • 01 Select data nodes
  • 02 Create data task
  • 03 Obtain data in real time
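
Expressed as a declarative sketch, the three steps above could look like the following (every field name is hypothetical and does not reflect DataPipeline's actual configuration):

    # A hypothetical, declarative view of the three-step workflow above.
    task = {
        # 01 select data nodes
        "source_node": {"type": "mysql", "host": "src.example.com", "table": "orders"},
        "destination_node": {"type": "kafka", "brokers": "kafka.example.com:9092", "topic": "orders"},
        # 02 create a data task
        "task": {"name": "orders_realtime_sync", "mode": "cdc"},
        # 03 obtain data in real time: change records arrive at the destination within seconds
    }
    print(task["task"]["name"], "->", task["destination_node"]["topic"])
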
Contact us