
Assertion `!should_flush_' failed. from memtable #244

Closed
honglianglu opened this issue Aug 25, 2014 · 12 comments

@honglianglu

I have run into difficulties using RocksDB. When I try to use a HashSkipListRep, I cannot open the DB, and I haven't found the docs helpful.
I set the options as:

```
rocksdb::Options opt = rocksdb::Options();
opt.IncreaseParallelism(2);
opt.create_if_missing = true;
opt.max_open_files = 300000;
auto prefix_extractor = NewFixedPrefixTransform(8);
opt.prefix_extractor.reset(prefix_extractor);
opt.memtable_factory.reset(NewHashSkipListRepFactory(bucket_count, skiplist_height, skiplist_branching_factor));
opt.table_factory.reset(NewPlainTableFactory());
rocksdb::Status s = rocksdb::DB::Open(opt, DBPath, &db);
```

When I run with this, I get the error:

```
db/memtable.cc:55: rocksdb::MemTable::MemTable(const rocksdb::InternalKeyComparator&, const rocksdb::Options&): Assertion `!should_flush_' failed.
```

I am using the latest version of RocksDB on Ubuntu 14.04.
Could anyone help me? Thanks a lot!

@igorcanadi
Collaborator

Can you try increasing write_buffer_size? I think the problem is that you configured the memtable in such a way that even an empty memtable exceeds the memory limit (write_buffer_size) and needs to be flushed. What's your bucket_count?
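A minimal sketch of the suggested change, written as a standalone fragment mirroring the reporter's opt object; the 64 MB value matches what the reporter settles on below and is only an example, not a tuned recommendation:

```
#include <rocksdb/options.h>

int main() {
  rocksdb::Options opt;
  // Hedged sketch: raise write_buffer_size so the memtable's up-front
  // allocation (which grows with bucket_count for the hash-skiplist
  // memtable) no longer exceeds the flush threshold at construction time.
  opt.write_buffer_size = 64 * 1024 * 1024;  // 64 MB, in bytes
  return 0;
}
```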

@honglianglu
Author

@igorcanadi Yes, thank you! When I set write_buffer_size to 64 MB, the problem went away. The bucket_count is set to 1000. But I have run into another problem, saying:

```
util/env_posix.cc:235: rocksdb::{anonymous}::PosixRandomAccessFile::PosixRandomAccessFile(const string&, int, const rocksdb::EnvOptions&): Assertion `!options.use_mmap_reads' failed.
```

I found that EnvOptions can set use_mmap_reads to true, but how can I make it take effect in the Env? I found no way to set an EnvOptions for the Env variable.
Thanks a lot!
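A hedged sketch of how this is usually wired, which is an assumption on my part and not something confirmed in this thread: the EnvOptions that RocksDB hands to the Env is normally derived from the DB-level options, so in practice the knob is Options::allow_mmap_reads rather than an EnvOptions you construct yourself:

```
#include <rocksdb/options.h>

int main() {
  rocksdb::Options opt;
  // Assumption, not confirmed in this thread: EnvOptions::use_mmap_reads is
  // normally populated from the DB-level option below when RocksDB builds
  // the EnvOptions it passes to the Env, so it is toggled on Options rather
  // than on the Env itself.
  opt.allow_mmap_reads = false;  // the default; per the later comments it
                                 // needs to stay false on a 32-bit build
  return 0;
}
```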

@igorcanadi
Collaborator

Are you running on 32-bit system?

@igorcanadi
Collaborator

This should fix it: https://reviews.facebook.net/D22419

@honglianglu
Author

@igorcanadi Yes, I am running on a 32-bit system. But I'm sorry, I can't follow you. How does https://reviews.facebook.net/D22419 solve my problem? Could you please explain that? Thank you very much!

@igorcanadi
Collaborator

Once it lands, you won't get any more assertions

igorcanadi added a commit that referenced this issue Aug 26, 2014
Summary:
See #244 (comment)
Also see this: https://github.com/facebook/rocksdb/blob/master/util/env_posix.cc#L1075

Test Plan: compiles

Reviewers: yhchiang, ljin, sdong

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22419
@igorcanadi
Collaborator

@honglianglu does it work now?

@honglianglu
Author

@igorcanadi Thank you, it works now. As you posted:
" We're initializing PosixRandomAccessFile when use_mmap_reads is true OR when we're on 32-bit system, see: https://github.com/facebook/rocksdb/blob/master/util/env_posix.cc#L1075

Therefore, when we're on 32-bit system and use_mmap_reads is false, we will hit the assertion."

I understand that now, and I have deleted the table_factory. Thanks a lot!
Then the documentation on how to use the prefix hash should be updated, right?
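
A minimal sketch of the fallback described in the quote above, using simplified stand-in types rather than the real RocksDB sources, and assuming the mmap-backed reader is only chosen on 64-bit builds when use_mmap_reads is set; the function name OpenRandomAccessFileSketch is a hypothetical stand-in for the env_posix.cc logic linked above:

```
#include <cassert>
#include <string>

// Hypothetical stand-in for rocksdb::EnvOptions.
struct EnvOptions {
  bool use_mmap_reads = false;
};

// Hypothetical stand-in for the POSIX reader; its constructor carries the
// pre-fix assertion quoted in the error message earlier in this thread.
struct PosixRandomAccessFile {
  PosixRandomAccessFile(const std::string& /*fname*/, int /*fd*/,
                        const EnvOptions& options) {
    assert(!options.use_mmap_reads);  // pre-D22419 check
  }
};

// Sketch of the selection logic: the mmap path is assumed to be taken only
// on 64-bit builds, so a 32-bit build always constructs
// PosixRandomAccessFile regardless of use_mmap_reads, which is how the
// assertion above can be reached. D22419 adjusts the check so this
// fallback no longer asserts.
void OpenRandomAccessFileSketch(const std::string& fname, int fd,
                                const EnvOptions& options) {
  if (options.use_mmap_reads && sizeof(void*) >= 8) {
    // 64-bit with mmap reads requested: an mmap-backed reader would be used.
  } else {
    PosixRandomAccessFile file(fname, fd, options);
    (void)file;
  }
}

int main() {
  EnvOptions opts;  // use_mmap_reads == false here, so the assert passes
  OpenRandomAccessFileSketch("dummy", -1, opts);
  return 0;
}
```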

@igorcanadi
Collaborator

Why would the document need to be updated?

@honglianglu
Author

2014-03-27-RocksDB-Meetup-Siying-Prefix-Hash.pdf is useless!

@igorcanadi
Collaborator

Are you saying it's useless on 32-bit system?

@igorcanadi
Collaborator

Sorry, use_mmap_reads needs to be false. That is the default anyway.

hunterlxt pushed a commit to hunterlxt/rocksdb that referenced this issue Jul 12, 2021
…#244)

Summary:
When investigating facebook#6666, we encountered an error when using sst_dump to dump an ingested SST file with a global seqno.
```
Corruption: An external sst file with version 2 have global seqno property with value ��/, while largest seqno in the file is 0)
```

As in facebook#5097, this is because SstFileReader doesn't know the largest seqno of a file, so it fails this check when it opens a file with a global seqno. https://github.com/facebook/rocksdb/blob/ca89ac2ba997dfa0e135bd75d4ccf6f5774a7eff/table/block_based_table_reader.cc#L730
Pull Request resolved: facebook#6673

Test Plan: run it manually

Reviewed By: cheng-chang

Differential Revision: D20937546

Pulled By: ajkr

fbshipit-source-id: c3fd04d60916a738533ee1885f3ea844669a9479
Signed-off-by: Connor1996 <zbk602423539@gmail.com>