Commit
* support dump quant error for layers
* fix bug
* remove useless code
* format code
* small change
* fix ofstream
* refine weights quantization with minimized MSE (sketched below)
* Update config.yml
* Update config.yml
* Update test_runner.py
* Update config.yml
* register evaluators for quant-related IRs and support dump quant error for k210
* remove useless files
* remove useless file
* remove useless file
* specify quant mode
* fix range
* Update convolution.cpp
* Update quantizer.cpp
* Update quantizer.cpp
* Update quantizer.cpp
* snake_case style
* Update quantizer.cpp
* remove assert
* fix data type
* Update quantizer.cpp
* dump op range for import graph
* add count_include_pad in tflite pool importer
* revert
* dump output range in order
* support dump range for noptq
* fix test_runner
* fix bug
* format issue
* format issue
* add k230 target in config.yml
* add bitcast clamp motion pass
* apply code-format changes
* add do_letterbox flag
* apply code-format changes
* revert bitcast motion, do it in another branch
* specify do_letterbox flag for each preprocess test case
* fix config
* flag for ncc
* use input_shape to decide whether to do letterbox or not
* fix typo
* fix letterbox bug
* determine input shape according to both input layout and network framework type
* apply code-format changes
* fix shape
* fix dump quant error
* dump data for each layer before and after quant
* apply code-format changes
* fix data_dir
* fix bias rounding issue
* apply code-format changes
* formatted
* data type
* support any bit width for quant
* support int16 quant
* do not modify src_bin now
* int16 for dequantize
* support quantization of multiple inputs
* fix set_input_tensor
* exclude wrong model
* fix no-inputs condition
* Update test_runner.py
* fold matmul-bitcast-add pattern

Co-authored-by: zhangjizhao <zhangjizhao@canaan-creative.com>
Co-authored-by: aaltonenzhang <aaltonenzhang@users.noreply.github.com>
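As a rough illustration of the "refine weights quantization with minimized MSE" item above, the idea is to scan candidate clipping thresholds for a weight tensor and keep the scale whose quantize-dequantize round trip has the lowest mean squared error against the original weights. The sketch below is a minimal, self-contained Python version under that assumption; `mse_weight_scale` and its parameters are illustrative names, not nncase APIs.

```python
# Hypothetical sketch: choose a symmetric int8 scale for a weight tensor by
# scanning clipping thresholds and keeping the one whose quantize->dequantize
# reconstruction has the smallest MSE. Not taken from the nncase source.
import numpy as np

def mse_weight_scale(weights: np.ndarray, bits: int = 8, steps: int = 100) -> float:
    qmax = 2 ** (bits - 1) - 1                     # e.g. 127 for int8
    max_abs = float(np.abs(weights).max())
    best_scale, best_mse = max_abs / qmax, np.inf
    for i in range(1, steps + 1):
        threshold = max_abs * i / steps            # candidate clipping range
        scale = threshold / qmax
        q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
        mse = np.mean((q * scale - weights) ** 2)  # quant error for this scale
        if mse < best_mse:
            best_scale, best_mse = scale, mse
    return best_scale

# Usage: derive a per-tensor scale for a randomly drawn conv kernel.
scale = mse_weight_scale(np.random.randn(16, 3, 3, 3).astype(np.float32))
print(scale)
```

The same reconstruction error (here, MSE between the float tensor and its dequantized copy) is also what a per-layer "dump quant error" pass would record for each layer's output.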