

Fix RR_1.1 to add_all_groups to add right amount of groups per endpoint #25767

Conversation

tehampson
Contributor

@tehampson tehampson commented Mar 21, 2023

This brings the code in line with what is actually written in the RR_1.1 spec. There was a bug in add_all_groups that wrote all of the groups to each of the endpoints, leading to writing more than the maximum number of supported groups per fabric.

One of the 4 PRs needed to fix #25532

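The fix distributes the fabric's group budget across the group-capable endpoints (floor division plus a remainder) rather than writing every group to every endpoint. A minimal sketch of that distribution arithmetic, using a hypothetical helper name (`split_groups_across_endpoints` is not part of the actual test code):

```python
import math

def split_groups_across_endpoints(groups_per_fabric, group_endpoints):
    # Mirrors the arithmetic in add_all_groups: each endpoint gets the
    # base share, and the first `groups_remainder` endpoints get one extra,
    # so the total written never exceeds groups_per_fabric.
    base_groups_per_endpoint = math.floor(groups_per_fabric / len(group_endpoints))
    groups_remainder = groups_per_fabric % len(group_endpoints)
    return [
        base_groups_per_endpoint + (1 if i < groups_remainder else 0)
        for i in range(len(group_endpoints))
    ]
```

For example, 8 supported groups per fabric spread over 3 endpoints yields [3, 3, 2]: the per-fabric total stays at 8, whereas the buggy behavior would have written 8 groups to each of the 3 endpoints.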
@github-actions

PR #25767: Size comparison from 934a6b1 to a384917

Decreases (2 builds for cc32xx, qpg)
platform target config section 934a6b1 a384917 change % change
cc32xx lock CC3235SF_LAUNCHXL .debug_info 20248022 20248021 -1 -0.0
qpg lock-app qpg6105+debug (read/write) 1121320 1121312 -8 -0.0
.text 568420 568412 -8 -0.0
Full report (5 builds for cc32xx, linux, qpg)
platform target config section 934a6b1 a384917 change % change
cc32xx lock CC3235SF_LAUNCHXL 0 0 0 0.0
(read only) 645601 645601 0 0.0
(read/write) 203848 203848 0 0.0
.ARM.attributes 44 44 0 0.0
.ARM.exidx 8 8 0 0.0
.bss 197248 197248 0 0.0
.comment 194 194 0 0.0
.data 1480 1480 0 0.0
.debug_abbrev 930292 930292 0 0.0
.debug_aranges 87400 87400 0 0.0
.debug_frame 300320 300320 0 0.0
.debug_info 20248022 20248021 -1 -0.0
.debug_line 2661345 2661345 0 0.0
.debug_loc 2805489 2805489 0 0.0
.debug_ranges 283264 283264 0 0.0
.debug_str 3027174 3027174 0 0.0
.ramVecs 780 780 0 0.0
.resetVecs 64 64 0 0.0
.rodata 105993 105993 0 0.0
.shstrtab 232 232 0 0.0
.stab 204 204 0 0.0
.stabstr 441 441 0 0.0
.stack 2048 2048 0 0.0
.strtab 380421 380421 0 0.0
.symtab 257408 257408 0 0.0
.text 537488 537488 0 0.0
linux chip-tool-ipv6only arm64 (read only) 12132100 12132100 0 0.0
(read/write) 742648 742648 0 0.0
.bss 34392 34392 0 0.0
.data 3008 3008 0 0.0
.data.rel.ro 684520 684520 0 0.0
.dynamic 560 560 0 0.0
.got 15512 15512 0 0.0
.init 24 24 0 0.0
.init_array 216 216 0 0.0
.rodata 585588 585588 0 0.0
.text 9784196 9784196 0 0.0
thermostat-no-ble arm64 (read only) 2523916 2523916 0 0.0
(read/write) 145240 145240 0 0.0
.bss 56344 56344 0 0.0
.data 1784 1784 0 0.0
.data.rel.ro 77696 77696 0 0.0
.dynamic 560 560 0 0.0
.got 5368 5368 0 0.0
.init 24 24 0 0.0
.init_array 432 432 0 0.0
.rodata 150904 150904 0 0.0
.text 2110624 2110624 0 0.0
qpg lighting-app qpg6105+debug (read/write) 1152768 1152768 0 0.0
.bss 96036 96036 0 0.0
.data 852 852 0 0.0
.text 599864 599864 0 0.0
lock-app qpg6105+debug (read/write) 1121320 1121312 -8 -0.0
.bss 91172 91172 0 0.0
.data 856 856 0 0.0
.text 568420 568412 -8 -0.0

@@ -644,11 +644,17 @@ async def add_all_groups(self,

base_groups_per_endpoint: int = math.floor(groups_per_fabric / len(group_endpoints))
groups_remainder: int = groups_per_fabric % len(group_endpoints)
fabric_group_index: int = 0

group_id_list = [group_id for group_id in (group_key_map[client_idx])]
Contributor

Suggested change
group_id_list = [group_id for group_id in (group_key_map[client_idx])]
group_id_list = list(group_key_map[client_idx])

might be clearer, but either way. And maybe call it group_id_list_copy?
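For illustration, the two copy styles discussed above behave identically; both produce a new list that is independent of the stored mapping (the sample data and variable values here are stand-ins, not the test's actual contents):

```python
group_key_map = {1: [0x0101, 0x0102, 0x0103]}  # hypothetical sample data
client_idx = 1

# Comprehension-based copy, as in the diff:
a = [group_id for group_id in group_key_map[client_idx]]
# list() copy, as suggested in the review:
b = list(group_key_map[client_idx])

assert a == b                              # same contents either way
assert b is not group_key_map[client_idx]  # a real copy, safe to mutate
```

Since both create an independent copy, the choice is purely stylistic; `list(...)` states the intent (a shallow copy) more directly, which is also why the reviewer suggests a name like `group_id_list_copy`.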

@@ -644,11 +644,17 @@ async def add_all_groups(self,

base_groups_per_endpoint: int = math.floor(groups_per_fabric / len(group_endpoints))
groups_remainder: int = groups_per_fabric % len(group_endpoints)
fabric_group_index: int = 0

Contributor Author

After chatting about this issue we have filed https://github.com/CHIP-Specifications/chip-test-plans/issues/2502. As a result, the code here will need to change again to test properly, so I am moving this PR to a draft while we wait for feedback on whether https://github.com/CHIP-Specifications/chip-test-plans/issues/2502 is in fact what we want Step 14 of this test to be doing.

@cletnick
Contributor

LGTM % comment

@tehampson
Contributor Author

This PR is incorporated into #25789, which contains additional fixes to resolve the issue.

@tehampson tehampson closed this Mar 23, 2023
Successfully merging this pull request may close these issues.

[BUG] TC_RR_1_1 group sub-test not run if Groups cluster endpoint is not 0