refactor: partitioned_lock's elaboration (apache#1540)
Extended the `try_new` interface while keeping the old one for
compatibility.
* Implemented the `try_new_suggest_cap` method, and renamed the old
`try_new` method to `try_new_bit_len` to keep compatibility.
* Modified structs and functions that call old interfaces.
* Added new unit tests
* Passed CI test
---------
Co-authored-by: chunhao.ch <chunhao@antgroup.com>
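The interface split above can be sketched roughly as follows (a minimal, hypothetical sketch; the real `PartitionedMutex` differs): `try_new_suggest_cap` rounds a suggested capacity up to a power-of-two partition count, which the renamed `try_new_bit_len` path expresses directly as a bit length.

```rust
use std::sync::{Mutex, MutexGuard};

// Number of bits needed so that 2^bit_len >= suggested_cap.
fn suggest_cap_to_bit_len(suggested_cap: usize) -> usize {
    let mut bit_len = 0;
    while (1usize << bit_len) < suggested_cap {
        bit_len += 1;
    }
    bit_len
}

struct PartitionedMutex<T> {
    partitions: Vec<Mutex<T>>,
}

impl<T: Clone> PartitionedMutex<T> {
    // New-style constructor: the caller passes a suggested capacity.
    fn try_new_suggest_cap(suggested_cap: usize, init: T) -> Self {
        Self::try_new_bit_len(suggest_cap_to_bit_len(suggested_cap), init)
    }

    // Old-style constructor, renamed: the caller passes the bit length directly.
    fn try_new_bit_len(bit_len: usize, init: T) -> Self {
        let n = 1usize << bit_len;
        Self {
            partitions: (0..n).map(|_| Mutex::new(init.clone())).collect(),
        }
    }

    fn lock(&self, hash: usize) -> MutexGuard<'_, T> {
        // A power-of-two partition count makes this a bitmask, not a modulo.
        self.partitions[hash & (self.partitions.len() - 1)]
            .lock()
            .unwrap()
    }
}

fn main() {
    let m = PartitionedMutex::try_new_suggest_cap(10, 0u64);
    assert_eq!(m.partitions.len(), 16); // 10 rounded up to the next power of two
    *m.lock(42) += 1;
}
```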
feat: support INSERT INTO SELECT (apache#1536)
Close apache#557.
When generating the insert logical plan, also generate the select logical plan and store it in the insert plan. Then execute the select logical plan in the insert interpreter, convert the resulting records into a RowGroup, and insert them.
CI
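A minimal sketch of the plan nesting described above (type and field names hypothetical; the real plan types live in the query engine):

```rust
// Hypothetical logical plan shape: an insert plan that can embed the
// select plan feeding it.
#[derive(Debug)]
enum LogicalPlan {
    Select { query: String },
    Insert {
        table: String,
        // `INSERT INTO t SELECT ...` stores the source query here;
        // a plain `INSERT INTO t VALUES ...` leaves it as None.
        source: Option<Box<LogicalPlan>>,
    },
}

// Build the plan for `INSERT INTO <table> <select>`.
fn plan_insert_select(table: &str, query: &str) -> LogicalPlan {
    LogicalPlan::Insert {
        table: table.to_string(),
        source: Some(Box::new(LogicalPlan::Select {
            query: query.to_string(),
        })),
    }
}

fn main() {
    let plan = plan_insert_select("metrics", "SELECT * FROM staging");
    // The interpreter would pattern-match: execute `source` first, convert
    // the resulting records into a RowGroup, then perform the insert.
    if let LogicalPlan::Insert { table, source } = &plan {
        assert_eq!(table, "metrics");
        assert!(source.is_some());
    }
}
```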
refactor: insert select to stream mode (apache#1544)
Close apache#1542
Perform the select and insert procedure in a streaming way.
CI test.
---------
Co-authored-by: jiacai2050 <dev@liujiacai.net>
fix(comment): update error documentation comment for remote engine service (apache#1548)
Update an error documentation comment in the code to reflect the correct
service name.
No need
refactor: manifest error code (apache#1546)
fix: sequence overflow when dropping a table using a message queue as WAL (apache#1550)
Fix the issue of sequence overflow when dropping a table using a message
queue as WAL.
Close apache#1543
Check the maximum value of sequence to prevent overflow.
CI.
feat: Add a new disk-based WAL implementation for standalone deployment (apache#1552)
1. Added a struct `Segment` responsible for reading and writing segment
files; it records the offset of each record.
2. Added a struct `SegmentManager` responsible for managing all segments,
including:
   1. Reading all segments from the folder upon creation.
   2. Writing only to the segment with the largest ID.
   3. Maintaining a cache where segments not in the cache are closed, while
   segments in the cache keep their files open and memory-mapped using
   mmap.
3. Implemented the `WalManager` trait.
Unit tests.
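The two structures above can be sketched like this (a simplified, hypothetical sketch: a `Vec<u8>` stands in for the mmap'ed file region, and error handling is omitted):

```rust
use std::collections::BTreeMap;

// One WAL segment file: remembers where each record starts.
struct Segment {
    id: u64,
    buf: Vec<u8>,               // stand-in for the memory-mapped file
    record_offsets: Vec<usize>, // offset of each record within the segment
}

impl Segment {
    fn new(id: u64) -> Self {
        Self { id, buf: Vec::new(), record_offsets: Vec::new() }
    }

    // Append a record and return its offset within the segment.
    fn append(&mut self, record: &[u8]) -> usize {
        let offset = self.buf.len();
        self.record_offsets.push(offset);
        self.buf.extend_from_slice(record);
        offset
    }
}

// Manages every segment found in the WAL directory, keyed by id.
struct SegmentManager {
    segments: BTreeMap<u64, Segment>,
}

impl SegmentManager {
    fn append(&mut self, record: &[u8]) -> usize {
        // Writes always go to the segment with the largest ID.
        let (_, seg) = self
            .segments
            .iter_mut()
            .next_back()
            .expect("no segments");
        seg.append(record)
    }
}

fn main() {
    let mut mgr = SegmentManager { segments: BTreeMap::new() };
    mgr.segments.insert(1, Segment::new(1));
    mgr.segments.insert(2, Segment::new(2));
    mgr.append(b"hello");
    let off = mgr.append(b"world");
    assert_eq!(off, 5); // second record starts where the first ended
    assert!(mgr.segments[&1].record_offsets.is_empty()); // older segment untouched
}
```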
chore: upgrade object store version (apache#1541)
The object store version is upgraded to 0.10.1 to prepare for access via
opendal.
- Impl `AsyncWrite` for `ObjectStoreMultiUpload`
- Impl `MultipartUpload` for `ObkvMultiPartUpload`
- Adapt the new API on the query writing path
- Existing tests
---------
Co-authored-by: jiacai2050 <dev@liujiacai.net>
feat: use opendal to access underlying storage (apache#1557)
Use opendal to access the object store, thus unifying the access method
of the underlying storage.
- Use opendal to access S3/OSS/local files
- Existing tests
feat: add metric engine rfc (apache#1558)
RFC for next metric engine.
No need.
chore: update link (apache#1561)
I noticed that the previous repository has been archived; it would be
better to update to the new link.
chore(horaemeta): add building docs (apache#1562)
feat: Implementing cross-segment read/write for WAL based on local disk (apache#1556)
Improve the WAL based on local disk.
This is a follow-up task for apache#1552.
1. Make MAX_FILE_SIZE configurable.
2. Allocate enough space when creating a segment to avoid remapping when
appending to the segment.
3. Add `MultiSegmentLogIterator` to enable cross-segment reading.
4. When writing, if the current segment has insufficient space, create a
new segment and write to the new segment.
Unit test.
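The cross-segment reading in item 3 can be sketched as an iterator that drains one segment's logs, then transparently moves to the next segment (a hypothetical sketch: each inner `Vec` stands in for the decoded logs of one segment):

```rust
// Iterates over the logs of several segments as one continuous stream.
struct MultiSegmentLogIterator {
    segments: Vec<Vec<String>>, // each inner Vec: the logs of one segment
    seg_idx: usize,
    log_idx: usize,
}

impl Iterator for MultiSegmentLogIterator {
    type Item = String;

    fn next(&mut self) -> Option<String> {
        while self.seg_idx < self.segments.len() {
            if self.log_idx < self.segments[self.seg_idx].len() {
                let item = self.segments[self.seg_idx][self.log_idx].clone();
                self.log_idx += 1;
                return Some(item);
            }
            // Current segment exhausted: advance to the next one.
            self.seg_idx += 1;
            self.log_idx = 0;
        }
        None
    }
}

fn main() {
    let it = MultiSegmentLogIterator {
        segments: vec![vec!["a".into(), "b".into()], vec![], vec!["c".into()]],
        seg_idx: 0,
        log_idx: 0,
    };
    let logs: Vec<String> = it.collect();
    assert_eq!(logs, ["a", "b", "c"]); // empty segment is skipped seamlessly
}
```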
chore: fix doc links (apache#1565)
fix: disable layered memtable in overwrite mode (apache#1533)
The layered memtable is only designed for append-mode tables now, and it
shouldn't be used for overwrite-mode tables.
- Make the default values in the config take effect.
- Add an `enable` field to control the layered memtable's on/off state.
- Add a check to prevent invalid options during table create/alter.
- Add related integration test cases.
Tested manually.
The following cases are considered:

Check and intercept invalid table options during table create/alter:
- layered memtable enabled but the mutable switch threshold is 0
- layered memtable enabled for an overwrite-mode table

Default value of the new table option field `layered_enable` when it is
not found in pb:
- false, when the whole `layered_memtable_options` does not exist
- false, when `layered_memtable_options` exists and
`mutable_segment_switch_threshold` == 0
- true, when `layered_memtable_options` exists and
`mutable_segment_switch_threshold` > 0
feat: init metric engine structure (apache#1554)
See apache#1558
Add a new sub-directory `horaedb`; all source code for the metric engine
is under it.
Add a new CI.
feat: Implement delete operation for WAL based on local storage (apache#1566)
Currently the WAL based on the local disk does not support the delete
function. This PR implements that functionality.
This is a follow-up task of apache#1552 and apache#1556.
1. For each `Segment`, add a hashmap to record the minimum and maximum
sequence numbers of all tables within that segment. During `delete` and
`write` operations, this hashmap will be updated. During read
operations, logs will be filtered based on this hashmap.
2. During the `delete` operation, based on the aforementioned hashmap,
if all logs of all tables in a read-only segment (a segment that is not
currently being written to) are marked as deleted, the segment file will
be physically deleted from the disk.
Unit test, TSBS and running a script locally that repeatedly inserts
data, forcibly kills, and restarts the database process to test
persistence.
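The per-segment bookkeeping in item 1 and the deletion rule in item 2 can be sketched like this (a hypothetical sketch: field and method names are illustrative, and the actual file unlinking is omitted):

```rust
use std::collections::HashMap;

type TableId = u64;
type SequenceNum = u64;

// One segment's view of which sequence numbers each table has in it.
struct SegmentIndex {
    // (min, max) sequence number of each table's logs in this segment.
    table_ranges: HashMap<TableId, (SequenceNum, SequenceNum)>,
}

impl SegmentIndex {
    // Update the hashmap on write so reads/deletes can filter by sequence.
    fn note_write(&mut self, table: TableId, seq: SequenceNum) {
        let entry = self.table_ranges.entry(table).or_insert((seq, seq));
        entry.0 = entry.0.min(seq);
        entry.1 = entry.1.max(seq);
    }

    // True if every table's logs in this segment fall at or below the
    // sequence deleted for that table, i.e. the whole file is garbage and
    // (for a read-only segment) can be physically removed from disk.
    fn fully_deleted(&self, deleted_up_to: &HashMap<TableId, SequenceNum>) -> bool {
        self.table_ranges.iter().all(|(table, (_, max))| {
            deleted_up_to.get(table).is_some_and(|d| max <= d)
        })
    }
}

fn main() {
    let mut seg = SegmentIndex { table_ranges: HashMap::new() };
    seg.note_write(1, 10);
    seg.note_write(1, 12);
    seg.note_write(2, 11);

    let mut deleted = HashMap::new();
    deleted.insert(1, 12);
    assert!(!seg.fully_deleted(&deleted)); // table 2 still has live logs
    deleted.insert(2, 11);
    assert!(seg.fully_deleted(&deleted)); // safe to unlink the file now
}
```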
fix: support to compat the old layered memtable options (apache#1568)
We introduced an explicit flag to control whether the layered memtable is
enabled, but it causes a compatibility problem when upgrading from an old
version.
This PR adds an option to support compatibility with the old layered
memtable on/off control method.
Tested manually.
chore: record replay cost in log (apache#1569)
1. Add replay cost to the log.
2. Remove verbose HTTP logs.
3. Restore the default to shard-based replay, which is faster in most WAL
implementations.
fix: logs might be missed during RegionBased replay in the WAL based on local disk (apache#1570)
In RegionBased replay, a batch of logs is first scanned from the WAL,
and then replayed on various tables using multiple threads. This
approach works fine for WALs based on tables, as the logs for each table
are clustered together. However, in a WAL based on local disk, the logs
for each table may be scattered across different positions within the
batch. During multi-threaded replay, it is possible that for a given
table, log2 is replayed before log1, resulting in missed logs.
1. Modify `split_log_batch_by_table` function to aggregate all logs for
a table together.
2. Modify `tableBatch` struct to change a single range into a
`Vec<Range>`.
Manual testing.
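The fix in item 1 and 2 can be sketched as follows (a hypothetical sketch: the real `split_log_batch_by_table` works on log entries, not bare table ids). Instead of keeping one contiguous range per table, every range belonging to a table is collected into a `Vec<Range>`, so one replay task owns all of that table's logs in order:

```rust
use std::collections::HashMap;
use std::ops::Range;

type TableId = u64;

// `batch[i]` is the table that log entry i belongs to. Returns, per table,
// every index range of that table's logs, in batch order.
fn split_log_batch_by_table(batch: &[TableId]) -> HashMap<TableId, Vec<Range<usize>>> {
    let mut result: HashMap<TableId, Vec<Range<usize>>> = HashMap::new();
    for (i, table) in batch.iter().enumerate() {
        let ranges = result.entry(*table).or_default();
        match ranges.last_mut() {
            // Extend the previous range if this entry is contiguous with it.
            Some(last) if last.end == i => last.end = i + 1,
            _ => ranges.push(i..i + 1),
        }
    }
    result
}

fn main() {
    // Table 1's logs are scattered: entries 0-1 and 3.
    let batch = [1, 1, 2, 1];
    let split = split_log_batch_by_table(&batch);
    assert_eq!(split[&1], vec![0..2, 3..4]); // both ranges go to one replay task
    assert_eq!(split[&2], vec![2..3]);
}
```

With all of a table's ranges handed to a single task, log1 can never be replayed after log2 for the same table, regardless of how the batch interleaves tables.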
fix format.
CONTRIBUTING.md (+4 -4)
```diff
@@ -5,14 +5,14 @@ To make the process easier and more valuable for everyone involved we have a few

 ## Submitting Issues and Feature Requests

-Before you file an [issue](https://github.com/apache/incubator-horaedb/issues/new), please search existing issues in case the same or similar issues have already been filed.
+Before you file an [issue](https://github.com/apache/horaedb/issues/new), please search existing issues in case the same or similar issues have already been filed.
 If you find an existing open ticket covering your issue then please avoid adding "👍" or "me too" comments; GitHub notifications can cause a lot of noise for the project maintainers who triage the back-log.
 However, if you have a new piece of information for an existing ticket, and you think it may help the investigation or resolution, then please do add it as a comment!
 You can signal to the team that you're experiencing an existing issue with one of GitHub's emoji reactions (these are a good way to add "weight" to an issue from a prioritisation perspective).

 ### Submitting an Issue

-The [New Issue](https://github.com/apache/incubator-horaedb/issues/new) page has templates for both bug reports and feature requests.
+The [New Issue](https://github.com/apache/horaedb/issues/new) page has templates for both bug reports and feature requests.
 Please fill one of them out!
 The issue templates provide details on what information we will find useful to help us fix an issue.
 In short though, the more information you can provide us about your environment and what behaviour you're seeing, the easier we can fix the issue.
@@ -31,14 +31,14 @@ All code must adhere to the `rustfmt` format, and pass all of the `clippy` check

 To open a PR you will need to have a GitHub account.
 Fork the `horaedb` repo and work on a branch on your fork.
-When you have completed your changes, or you want some incremental feedback make a Pull Request to HoraeDB [here](https://github.com/apache/incubator-horaedb/compare).
+When you have completed your changes, or you want some incremental feedback make a Pull Request to HoraeDB [here](https://github.com/apache/horaedb/compare).

 If you want to discuss some work in progress then please prefix `[WIP]` to the
 PR title.

 For PRs that you consider ready for review, verify the following locally before you submit it:

-* you have a coherent set of logical commits, with messages conforming to the [Conventional Commits](https://horaedb.apache.org/dev/conventional_commit.html) specification;
+* you have a coherent set of logical commits, with messages conforming to the [Conventional Commits](https://horaedb.apache.org/docs/dev/conventional_commit/) specification;
 * all the tests and/or benchmarks pass, including documentation tests;
 * the code is correctly formatted and all `clippy` checks pass; and
 * you haven't left any "code cruft" (commented out code blocks etc).
```