[Improve][Sort] Add rate limit for ingesting into iceberg #7724

Closed · Tracked by #7747 · Fixed by #7725

thexiay opened this issue Mar 29, 2023 · 1 comment

thexiay (Contributor) commented Mar 29, 2023

InLong Component

InLong Sort

Motivation

In the existing data synchronization pipeline, snapshot data and incremental data are sent to Kafka first and then streamed into Iceberg by Flink. Consuming the snapshot data directly causes very high throughput and severe disorder (writes land in partitions at random), which degrades write performance and produces throughput glitches. The job can even keep crashing because it exceeds its memory limit, e.g.:

Proposed Changes

[Screenshot: 企业微信截图_16798860197817]
Here we can see the serious disorder and high throughput described above.

Rate limit

In this situation, the write.rate.limit option can be turned on to keep writes smooth; it will decrease throughput.
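
As a rough illustration (not the actual InLong Sort implementation), a total limit can be enforced by giving each sink subtask an equal share of the global budget and blocking on a token-bucket limiter such as Guava's RateLimiter. The class and field names below are hypothetical:

```java
import com.google.common.util.concurrent.RateLimiter;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

// Hypothetical sketch: gates record emission behind a token-bucket limiter so
// that all subtasks together stay under the configured 'write.rate.limit'.
public class RateLimitedSink<T> extends RichSinkFunction<T> {

    private final double totalRecordsPerSecond; // value of 'write.rate.limit'
    private transient RateLimiter limiter;

    public RateLimitedSink(double totalRecordsPerSecond) {
        this.totalRecordsPerSecond = totalRecordsPerSecond;
    }

    @Override
    public void open(Configuration parameters) {
        // Split the global budget evenly so the sum over all parallel
        // subtasks matches the configured total limit.
        int parallelism = getRuntimeContext().getNumberOfParallelSubtasks();
        limiter = RateLimiter.create(totalRecordsPerSecond / parallelism);
    }

    @Override
    public void invoke(T record, Context context) {
        limiter.acquire(); // blocks until a permit is available
        // ... hand the record to the actual Iceberg writer here ...
    }
}
```

Dividing by parallelism is the simplest static policy; item 2 of the rollout plan (dynamic adjustment) would instead resize each subtask's share at runtime.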

Memory and disk map

In the scenario above, we found that insertRowsMap occupies around 30% of memory. It is a heap map; if we replace it with a memory-plus-disk map, we can reduce memory pressure.
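
For illustration, here is a minimal sketch of a disk-backed replacement for the on-heap map, using the RocksDB Java API; the class name and the byte[]-based interface are hypothetical, and key/value serialization is elided:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical sketch: keeps the insert-row bookkeeping in RocksDB so it
// spills to native memory and disk instead of growing the JVM heap.
public class DiskBackedRowMap implements AutoCloseable {

    static {
        RocksDB.loadLibrary();
    }

    private final RocksDB db;

    public DiskBackedRowMap(String path) throws RocksDBException {
        db = RocksDB.open(new Options().setCreateIfMissing(true), path);
    }

    public void put(byte[] serializedKey, byte[] serializedRow) throws RocksDBException {
        db.put(serializedKey, serializedRow); // stored off-heap, spills to disk
    }

    public byte[] get(byte[] serializedKey) throws RocksDBException {
        return db.get(serializedKey); // returns null if the key is absent
    }

    @Override
    public void close() {
        db.close();
    }
}
```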

Example Usage

The rate limit parameter can be put in either the table options or the table parameters.
For example, you can create an Iceberg table in Flink's in-memory catalog via the Flink SQL Client:

```sql
CREATE TABLE inlong_iceberg13 (
    id BIGINT,
    name STRING,
    PRIMARY KEY (id) NOT ENFORCED
) WITH (
    'connector' = 'iceberg-inlong',
    'catalog-name' = 'hive_prod',
    'catalog-database' = 'inlong_db',
    'catalog-table' = 'inlong_table',
    'uri' = 'thrift://localhost:9083',
    'warehouse' = 'hdfs://localhost:8020/hive/warehouse',
    'write.rate.limit' = '1000'
);
```

Here 'write.rate.limit' = '1000' means that when sinking into this table, all subtasks together can consume at most 1000 records per second.

Alternatively, you can create the table in a Hive catalog and set write.rate.limit in the table parameters.

Error Handling

Case 1:

  1. Q: Will checkpoint triggering be affected when the flow is limited? Could operators downstream of the checkpoint barrier be unable to process it for a long time, causing the checkpoint to time out?
     A: There will be some impact, but it is not serious.

a. Because downstream processing is rate-limited, buffers back up and cause backpressure, so the upstream source delays or even suspends pulling data from its splits.
b. The barrier is inserted synchronously into all downstream channels when the CheckpointEvent is executed in the mailbox.
c. The barrier is sent as an event, so it is not held back by insufficient buffer memory, but it still queues behind the data ahead of it.
d. In other words, once a checkpoint is triggered, the amount of data the downstream must process before the barrier arrives is at least "backlogged bytes in the network / average bytes per record = total backlogged records"; as long as this backlog can be drained before the checkpoint timeout, the checkpoint completes (see the back-of-the-envelope sketch below).
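
To make item d concrete, here is a back-of-the-envelope calculation; the buffer size, record size, and limit are assumed illustrative numbers, not measured values:

```java
// Illustrative arithmetic for item d: how long the rate-limited sink needs
// to drain the in-flight backlog before the barrier can be processed.
public class CheckpointBacklogEstimate {
    public static void main(String[] args) {
        long networkBufferBytes = 32L * 1024 * 1024; // assumed in-flight buffers
        long avgRecordBytes = 1024;                  // assumed average record size
        double rateLimit = 1000.0;                   // 'write.rate.limit'

        long backlogRecords = networkBufferBytes / avgRecordBytes; // 32768 records
        double drainSeconds = backlogRecords / rateLimit;          // ~32.8 seconds

        // The checkpoint completes as long as drainSeconds stays below the
        // configured checkpoint timeout.
        System.out.printf("backlog=%d records, drain=%.1f s%n", backlogRecords, drainSeconds);
    }
}
```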

Rollout/Adoption Plan

  1. Implement the rate limit.
  2. Support dynamic adjustment of the rate limit.
  3. Replace insertMap (which determines, per checkpoint, whether a record should be turned into a position delete) with a RocksDB map instead of a heap map.
dockerzhang (Contributor) commented:
Please use English for the image.

@thexiay thexiay changed the title [Improve][Sort] Improve memory stability of data ingesting into iceberg [Improve][Sort] Add rate limit for ingesting into iceberg Apr 12, 2023