
[chore](sql) Forbid show hidden columns and create table with hidden column (#38796) #38923

Closed
wants to merge 3,468 commits

Conversation

924060929
Contributor

cherry pick from #38796

kaijchen and others added 30 commits July 16, 2024 22:32
## Proposed changes

Issue Number: close #xxx

## Proposed changes

cherry pick apache#37906 

…apache#37958)

bp apache#37249

Co-authored-by: slothever <18522955+wsjz@users.noreply.github.com>
…apache#37969)

## Proposed changes

Issue Number: close #xxx

…he#37967)

bp apache#37930 
## Proposed changes

Issue Number: close #xxx

…chema three replica (apache#36130) (apache#37961)

bp apache#36130

Co-authored-by: HB <137497191@qq.com>
Co-authored-by: camby <104178625@qq.com>
…ter TTL (apache#37288) (apache#37983)

pick (apache#37288)

When using routine load, after the data load is completed, the lag is still reported as a positive number:
```
  Lag: {"0":16,"1":15,"2":16,"3":16,"4":16,"5":16,"6":15,"7":16,"8":16,"9":16,"10":15,"11":16,"12":15,"13":15,"14":16,"15":16,"16":17,"17":15,"18":16,"19":15,"20":16,"21":16,"22":16,"23":16,"24":15,"25":17,"26":17,"27":16,"28":16,"29":16,"30":16,"31":17,"32":14,"33":16,"34":17,"35":16,"36":15,"37":15,"38":15,"39":16,"40":16,"41":16,"42":15,"43":15,"44":17,"45":16,"46":15,"47":15,"48":16,"49":17,"50":16,"51":15,"52":16,"53":15,"54":15,"55":17,"56":16,"57":17,"58":16,"59":16,"60":15,"61":15,"62":16,"63":16,"64":17,"65":16,"66":15,"67":16,"68":17,"69":16,"70":15,"71":17}
```
and the routine load job is paused when the Kafka data reaches its TTL and is deleted, with the error `out of range`.

This happens because the Kafka EOF message has its own offset, which needs to be counted in the lag statistics.

**Note (important):**
After the bug is fixed, if you set
```
"property.enable.partition.eof" = "false"
```
in your routine load job, you will run into the problem again, because the fix relies on the EOF message's offset, and this config is true by default in Doris.
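
For reference, a minimal sketch of a routine load job that leaves the EOF property at its default; the job, table, broker, and topic names below are placeholders:

```sql
-- Minimal sketch with placeholder names; enable.partition.eof stays at
-- its default of true so the fixed lag accounting sees the EOF offsets.
CREATE ROUTINE LOAD example_db.example_job ON example_tbl
FROM KAFKA (
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "example_topic",
    "property.enable.partition.eof" = "true"
);
```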
## Proposed changes

pick from master apache#37796

…7988)

pick from master apache#37720

Support using a hint parameter without a key, for example:

```sql
SELECT /*+ query_timeout(3000) */ * FROM t;
```
…apache#37889) (apache#38013)

pick from master apache#37889

We use the unscaled value of BigDecimal during tablet pruning, so we need to ensure that the BigDecimal's precision and scale match those of the literal that contains it.
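
To illustrate the scale issue, a hypothetical example (`t` and `dk` are placeholder names): the literal `1.5` has unscaled value 15 at scale 1, while the same value stored in a `DECIMAL(10, 2)` column is 1.50 with unscaled value 150 at scale 2, so comparing unscaled values without aligning precision and scale can prune the wrong tablets.

```sql
-- Hypothetical table whose distribution column dk is DECIMAL(10, 2).
-- The literal 1.5 parses at scale 1 (unscaled value 15); the stored
-- value 1.50 is at scale 2 (unscaled value 150). Pruning must align
-- the literal's scale with the column's before comparing.
SELECT * FROM t WHERE dk = 1.5;
```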
…pache#38016)

## Proposed changes

pick from apache#37788

Co-authored-by: zhongjian.xzj <zhongjian.xzj@zhongjianxzjdeMacBook-Pro.local>
… columns without repeated deserialization. (apache#37377)" (apache#38007)

Reverts apache#37530
Needs more testing; revert it temporarily.
…e dropped (apache#37809) (apache#38024)

## Proposed changes
pick from apache#37809 
Issue Number: close #xxx

… value (apache#37996)

pick from master apache#37932

## Proposed changes

Issue Number: close #xxx

The default value of `enable_fallback_to_original_planner` is true, but the resource privilege check is only supported in Nereids, so we need to set `enable_fallback_to_original_planner=false`.
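
A minimal sketch of the session setting described above:

```sql
-- Disable fallback so the query is planned by Nereids,
-- where the resource privilege check is supported.
SET enable_fallback_to_original_planner = false;
```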
wyxxxcat and others added 21 commits August 5, 2024 16:04
…#38871)

## Proposed changes

pick from master apache#38080

## Proposed changes

Currently we use `query_timeout` to set a timeout for queries, but the pipelineX engine does not honor it, so a query will not end until EOS even after the timeout has passed. This PR fixes that.

pick apache#35328

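
For reference, a sketch of setting the timeout via the session variable (the value and table name are placeholders; `query_timeout` is in seconds):

```sql
-- With this fix, the pipelineX engine cancels the query once the
-- timeout elapses instead of running until EOS.
SET query_timeout = 300;
SELECT * FROM t;
```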
…eberg writer. (apache#38902)

## Proposed changes
[Fix] (multi-catalog) Fix errors not being thrown when close() is called in the hive/iceberg writer.

When the file writer's close() is called, it syncs its buffer in order to commit. As a result, data is sometimes written only at close() time, which can surface errors, for example in hdfs_file_writer. Such errors therefore need to be captured throughout the entire close process.
…column (apache#38796)

Forbid showing hidden columns and creating a table with a hidden column.

(cherry picked from commit 9eae4ba)
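
A hypothetical illustration of the kind of statement that is now rejected (`__DORIS_DELETE_SIGN__` is one of Doris's internal hidden column names; the table name is a placeholder):

```sql
-- Now rejected: a user column may not reuse a reserved hidden column name.
CREATE TABLE t_bad (
    k1 INT,
    __DORIS_DELETE_SIGN__ INT
)
DISTRIBUTED BY HASH(k1) BUCKETS 1
PROPERTIES ("replication_num" = "1");
```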
@doris-robot

Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR

Since 2024-03-18, the documentation has been moved to doris-website.
See Doris Document.
