<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->

# DataFusion Comet 0.14.1 Changelog

This release consists of 5 commits from 1 contributor. See the credits at the end of this changelog for more information.

**Fixed bugs:**

- fix: [branch-0.14] backport #3802 - cache object stores and bucket regions to reduce DNS query volume [#3935](https://github.com/apache/datafusion-comet/pull/3935) (andygrove)
- fix: [branch-0.14] backport #3879 - skip Comet columnar shuffle for stages with DPP scans [#3934](https://github.com/apache/datafusion-comet/pull/3934) (andygrove)
- fix: [branch-0.14] backport #3914 - use min instead of max when capping write buffer size to Int range [#3936](https://github.com/apache/datafusion-comet/pull/3936) (andygrove)
- fix: [branch-0.14] backport #3865 - handle ambiguous and non-existent local times [#3937](https://github.com/apache/datafusion-comet/pull/3937) (andygrove)
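To illustrate the class of bug fixed by the write-buffer capping backport (#3914): clamping a 64-bit size down into the `Int` range requires taking the *minimum* of the requested size and `Integer.MAX_VALUE`; using `max` would instead inflate every request. This is a minimal sketch, not Comet's actual code; the `capToIntRange` helper name is hypothetical.

```java
public class BufferCap {
    // Hypothetical helper illustrating the fix: clamp a long-valued
    // buffer size into the signed 32-bit Int range. Using Math.max here
    // (the bug) would force every request up to Integer.MAX_VALUE.
    static int capToIntRange(long requested) {
        return (int) Math.min(requested, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        System.out.println(capToIntRange(1024L));           // small sizes pass through unchanged
        System.out.println(capToIntRange(10_000_000_000L)); // oversized requests clamp to 2147483647
    }
}
```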
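The ambiguous/non-existent local-time backport (#3865) concerns wall-clock times that fall inside a DST transition. A sketch of the underlying problem, using standard `java.time` behavior (this is not Comet's code, only an illustration of the semantics a native engine must match):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstGapExample {
    public static void main(String[] args) {
        // 2024-03-10 02:30 does not exist in America/Los_Angeles: clocks
        // jumped from 02:00 straight to 03:00 that morning.
        LocalDateTime nonExistent = LocalDateTime.of(2024, 3, 10, 2, 30);

        // java.time resolves a DST gap by shifting the local time forward
        // by the length of the gap (one hour here), giving 03:30 -07:00.
        ZonedDateTime resolved =
                nonExistent.atZone(ZoneId.of("America/Los_Angeles"));
        System.out.println(resolved); // 2024-03-10T03:30-07:00[America/Los_Angeles]
    }
}
```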

## Credits

Thank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.

```
5 Andy Grove
```
Thank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.