From 7a78703952b8de0cb29919f60f946109622370e5 Mon Sep 17 00:00:00 2001 From: pettershao-ragilenetworks <81281940+pettershao-ragilenetworks@users.noreply.github.com> Date: Thu, 8 Jul 2021 13:27:59 +0800 Subject: [PATCH 01/15] Install dotnet core to fix python gcov warning for code covery color bar showing (#215) **- What I did** fix python gcov warning "Please install dotnet core to enable automatic generation of Html report" **- How I did it** install dotnet core --- azure-pipelines.yml | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 09ee1dbaf..f73a8e06a 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -44,7 +44,7 @@ stages: - script: | set -ex - + sudo apt-get -y purge libhiredis-dev libnl-3-dev libnl-route-3-dev sudo dpkg -i ../target/debs/buster/{libswsscommon_1.0.0_amd64.deb,python3-swsscommon_1.0.0_amd64.deb,libnl-3-200_*.deb,libnl-genl-3-200_*.deb,libnl-nf-3-200_*.deb,libnl-route-3-200_*.deb,libhiredis0.14_*.deb} sudo python3 -m pip install ../target/python-wheels/swsssdk*-py3-*.whl sudo python3 -m pip install ../target/python-wheels/sonic_py_common-1.0-py3-none-any.whl @@ -59,6 +59,15 @@ stages: python3 setup.py test displayName: "Unit tests" + - script: | + set -ex + # Install .NET CORE + curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add - + sudo apt-add-repository https://packages.microsoft.com/debian/10/prod + sudo apt-get update + sudo apt-get install -y dotnet-sdk-5.0 + displayName: "Install .NET CORE" + - task: PublishTestResults@2 inputs: testResultsFiles: '$(System.DefaultWorkingDirectory)/test-results.xml' From 0813b42440e3da3a499f3f4bcc644926f2375d02 Mon Sep 17 00:00:00 2001 From: Raphael Tryster <75927947+raphaelt-nvidia@users.noreply.github.com> Date: Mon, 12 Jul 2021 19:48:39 +0300 Subject: [PATCH 02/15] Entries under .1.3.6.1.2.1.31.1.1.1.18 OID should return the "description" field of PORT_TABLE entries in APPL_DB or CONFIG_DB. (#224) - What I did This is a correction of #218, which is contained in Azure/sonic-buildimage#7859, after community decided that entries under .1.3.6.1.2.1.31.1.1.1.18 OID should return the "description" field of PORT_TABLE entries in APPL_DB or CONFIG_DB. For vlan, management and LAG, these are empty strings. - How I did it Deleted the lines of code quoted by Suvarna in the above PRs. This necessitated modifying 4 unit tests that had been written under the assumption that these OIDs would return non-empty data. - How to verify it Run unit tests in build and snmp tests in sonic-mgmt. - Description for the changelog Entries under .1.3.6.1.2.1.31.1.1.1.18 OID should return the "description" field of PORT_TABLE entries in APPL_DB or CONFIG_DB. 
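A minimal sketch of the lookup this patch leaves in place, using invented PORT_TABLE rows (only front-panel ports carry a "description" field, so LAG, vlan and mgmt interfaces now yield an empty ifAlias):

```python
# Hypothetical PORT_TABLE rows, for illustration only.
port_table = {
    "Ethernet0": {"description": "ToR uplink", "alias": "etp1"},
    "Vlan1000": {},   # vlan/LAG/mgmt rows have no "description" field
}

def if_alias(entry):
    # Mirrors the simplified interface_alias(): no RFC1213 name fallback any more.
    return entry.get("description", "")

assert if_alias(port_table["Ethernet0"]) == "ToR uplink"
assert if_alias(port_table["Vlan1000"]) == ""
```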
Signed-off-by: Raphael Tryster --- src/sonic_ax_impl/mibs/ietf/rfc2863.py | 12 +----------- tests/namespace/test_interfaces.py | 10 ++++++---- tests/test_interfaces.py | 10 ++++++---- 3 files changed, 13 insertions(+), 19 deletions(-) diff --git a/src/sonic_ax_impl/mibs/ietf/rfc2863.py b/src/sonic_ax_impl/mibs/ietf/rfc2863.py index e064edc72..e4f8c8a0f 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc2863.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc2863.py @@ -210,18 +210,8 @@ def interface_alias(self, sub_id): if not entry: return + # This returns empty values for LAG, vlan & mgmt, which is the expected result result = entry.get("description", "") - - if not result: - #RFC2863 tables don't have descriptions for LAG, vlan & mgmt; take from RFC1213 - oid = self.get_oid(sub_id) - if oid in self.oid_lag_name_map: - result = self.oid_lag_name_map[oid] - elif oid in self.mgmt_oid_name_map: - result = self.mgmt_alias_map[self.mgmt_oid_name_map[oid]] - elif oid in self.vlan_oid_name_map: - result = self.vlan_oid_name_map[oid] - return result def get_counter32(self, sub_id, table_name): diff --git a/tests/namespace/test_interfaces.py b/tests/namespace/test_interfaces.py index 944787f74..4ee723b20 100644 --- a/tests/namespace/test_interfaces.py +++ b/tests/namespace/test_interfaces.py @@ -911,7 +911,8 @@ def test_mgmt_iface_description_ifMIB(self): def test_vlan_iface_ifMIB(self): """ - Test that vlan interface is present in the ifMIB OID path of the MIB + Test that vlan interface is present in the ifMIB OID path of the MIB. + It is empty because there is no corresponding entry in config DB. """ oid = ObjectIdentifier(12, 0, 0, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 2999)) get_pdu = GetNextPDU( @@ -926,11 +927,12 @@ def test_vlan_iface_ifMIB(self): value0 = response.values[0] self.assertEqual(value0.type_, ValueType.OCTET_STRING) self.assertEqual(str(value0.name), str(ObjectIdentifier(12, 0, 1, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 3000)))) - self.assertEqual(str(value0.data), 'Vlan1000') + self.assertEqual(str(value0.data), '') def test_vlan_iface_description_ifMIB(self): """ - Test vlan interface description (which is simply the name) in the ifMIB OID path of the MIB + Test vlan interface description in the ifMIB OID path of the MIB. + It is empty because there is no corresponding entry in config DB. """ oid = ObjectIdentifier(12, 0, 0, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 3000)) get_pdu = GetPDU( @@ -945,6 +947,6 @@ def test_vlan_iface_description_ifMIB(self): value0 = response.values[0] self.assertEqual(value0.type_, ValueType.OCTET_STRING) self.assertEqual(str(value0.name), str(ObjectIdentifier(12, 0, 1, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 3000)))) - self.assertEqual(str(value0.data), 'Vlan1000') + self.assertEqual(str(value0.data), '') diff --git a/tests/test_interfaces.py b/tests/test_interfaces.py index 101fcdc82..581cc2fb2 100755 --- a/tests/test_interfaces.py +++ b/tests/test_interfaces.py @@ -914,7 +914,8 @@ def test_mgmt_iface_description_ifMIB(self): def test_vlan_iface_ifMIB(self): """ - Test that vlan interface is present in the ifMIB OID path of the MIB + Test that vlan interface is present in the ifMIB OID path of the MIB. + It is empty because there is no corresponding entry in config DB. 
""" oid = ObjectIdentifier(12, 0, 0, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 2999)) get_pdu = GetNextPDU( @@ -929,11 +930,12 @@ def test_vlan_iface_ifMIB(self): value0 = response.values[0] self.assertEqual(value0.type_, ValueType.OCTET_STRING) self.assertEqual(str(value0.name), str(ObjectIdentifier(12, 0, 1, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 3000)))) - self.assertEqual(str(value0.data), 'Vlan1000') + self.assertEqual(str(value0.data), '') def test_vlan_iface_description_ifMIB(self): """ - Test vlan interface description (which is simply the name) in the ifMIB OID path of the MIB + Test vlan interface description in the ifMIB OID path of the MIB. + It is empty because there is no corresponding entry in config DB. """ oid = ObjectIdentifier(12, 0, 0, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 3000)) get_pdu = GetPDU( @@ -948,7 +950,7 @@ def test_vlan_iface_description_ifMIB(self): value0 = response.values[0] self.assertEqual(value0.type_, ValueType.OCTET_STRING) self.assertEqual(str(value0.name), str(ObjectIdentifier(12, 0, 1, 0, (1, 3, 6, 1, 2, 1, 31, 1, 1, 1, 18, 3000)))) - self.assertEqual(str(value0.data), 'Vlan1000') + self.assertEqual(str(value0.data), '') def test_vlan_iface_1213_2863_consistent(self): """ From 21d7d97c6944fcd1df116bb8f60561e7d1e15608 Mon Sep 17 00:00:00 2001 From: Qi Luo Date: Mon, 12 Jul 2021 09:52:19 -0700 Subject: [PATCH 03/15] Fix: SonicV2Connector behavior change: get_all will return empty dict if (#226) the hash does not exist in Redis - What I did Fixes Azure/sonic-buildimage#8140 ref: swsssdk implementation returns None, and the library will be deprecated libswsscommon implementation returns empty dict - How I did it Relax the condition check to accept both representations - How to verify it Unit test Signed-off-by: Qi Luo --- src/sonic_ax_impl/mibs/ietf/rfc1213.py | 2 +- src/sonic_ax_impl/mibs/ietf/rfc4292.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/src/sonic_ax_impl/mibs/ietf/rfc1213.py b/src/sonic_ax_impl/mibs/ietf/rfc1213.py index 53c143c6e..812cb2107 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc1213.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc1213.py @@ -79,7 +79,7 @@ def _update_from_db(self): neigh_str = neigh_key db_index = self.neigh_key_list[neigh_key] neigh_info = self.db_conn[db_index].get_all(mibs.APPL_DB, neigh_key, blocking=False) - if neigh_info is None: + if not neigh_info: continue ip_family = neigh_info['family'] if ip_family == "IPv4": diff --git a/src/sonic_ax_impl/mibs/ietf/rfc4292.py b/src/sonic_ax_impl/mibs/ietf/rfc4292.py index de75c05cf..ea5965e6e 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc4292.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc4292.py @@ -65,7 +65,7 @@ def update_data(self): continue port_table = multi_asic.get_port_table_for_asic(db_conn.namespace) ent = db_conn.get_all(mibs.APPL_DB, route_str, blocking=False) - if ent is None: + if not ent: continue nexthops = ent["nexthop"] ifnames = ent["ifname"] From 4d6bb79cd1cad90a936bfb046d464476579eb964 Mon Sep 17 00:00:00 2001 From: Qi Luo Date: Mon, 2 Aug 2021 23:59:42 -0700 Subject: [PATCH 04/15] Non-block reading counters to tolerate corrupted/delayed counters in COUNTERS_DB (#229) **- What I did** Interface counters in COUNTERS_DB may be corrupted or delayed. We could not assume they are always available. 
**- How to verify it** Unit test and smoke test on DUT --- src/sonic_ax_impl/mibs/ietf/rfc1213.py | 14 +-- src/sonic_ax_impl/mibs/ietf/rfc2863.py | 7 +- .../mibs/vendor/cisco/ciscoPfcExtMIB.py | 8 +- tests/mock_tables/counters_db.json | 102 ------------------ tests/mock_tables/dbconnector.py | 2 +- 5 files changed, 20 insertions(+), 113 deletions(-) diff --git a/src/sonic_ax_impl/mibs/ietf/rfc1213.py b/src/sonic_ax_impl/mibs/ietf/rfc1213.py index 812cb2107..6e410ea79 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc1213.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc1213.py @@ -262,8 +262,9 @@ def update_if_counters(self): namespace, sai_id = mibs.split_sai_id_key(sai_id_key) if_idx = mibs.get_index_from_str(self.if_id_map[sai_id_key]) counters_db_data = self.namespace_db_map[namespace].get_all(mibs.COUNTERS_DB, - mibs.counter_table(sai_id), - blocking=True) + mibs.counter_table(sai_id)) + if counters_db_data is None: + counters_db_data = {} self.if_counters[if_idx] = { counter: int(value) for counter, value in counters_db_data.items() } @@ -272,8 +273,9 @@ def update_rif_counters(self): rif_sai_ids = list(self.rif_port_map) + list(self.vlan_name_map) for sai_id in rif_sai_ids: counters_db_data = Namespace.dbs_get_all(self.db_conn, mibs.COUNTERS_DB, - mibs.counter_table(mibs.split_sai_id_key(sai_id)[1]), - blocking=False) + mibs.counter_table(mibs.split_sai_id_key(sai_id)[1])) + if counters_db_data is None: + counters_db_data = {} self.rif_counters[sai_id] = { counter: int(value) for counter, value in counters_db_data.items() } @@ -358,8 +360,8 @@ def aggregate_counters(self): port_idx = mibs.get_index_from_str(self.if_id_map[port_sai_id]) for port_counter_name, rif_counter_name in mibs.RIF_DROPS_AGGR_MAP.items(): self.if_counters[port_idx][port_counter_name] = \ - self.if_counters[port_idx][port_counter_name] + \ - self.rif_counters[rif_sai_id][rif_counter_name] + self.if_counters[port_idx].get(port_counter_name, 0) + \ + self.rif_counters[rif_sai_id].get(rif_counter_name, 0) for vlan_sai_id, vlan_name in self.vlan_name_map.items(): for port_counter_name, rif_counter_name in mibs.RIF_COUNTERS_AGGR_MAP.items(): diff --git a/src/sonic_ax_impl/mibs/ietf/rfc2863.py b/src/sonic_ax_impl/mibs/ietf/rfc2863.py index e4f8c8a0f..96d6bf3e4 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc2863.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc2863.py @@ -145,8 +145,11 @@ def update_data(self): for sai_id_key in self.if_id_map: namespace, sai_id = mibs.split_sai_id_key(sai_id_key) if_idx = mibs.get_index_from_str(self.if_id_map[sai_id_key]) - self.if_counters[if_idx] = self.namespace_db_map[namespace].get_all(mibs.COUNTERS_DB, \ - mibs.counter_table(sai_id), blocking=True) + counter_table = self.namespace_db_map[namespace].get_all(mibs.COUNTERS_DB, \ + mibs.counter_table(sai_id)) + if counter_table is None: + counter_table = {} + self.if_counters[if_idx] = counter_table self.lag_name_if_name_map, \ self.if_name_lag_name_map, \ diff --git a/src/sonic_ax_impl/mibs/vendor/cisco/ciscoPfcExtMIB.py b/src/sonic_ax_impl/mibs/vendor/cisco/ciscoPfcExtMIB.py index 51acc8fe4..c2630d322 100644 --- a/src/sonic_ax_impl/mibs/vendor/cisco/ciscoPfcExtMIB.py +++ b/src/sonic_ax_impl/mibs/vendor/cisco/ciscoPfcExtMIB.py @@ -47,8 +47,12 @@ def update_data(self): for sai_id_key in self.if_id_map: namespace, sai_id = mibs.split_sai_id_key(sai_id_key) if_idx = mibs.get_index_from_str(self.if_id_map[sai_id_key]) - self.if_counters[if_idx] = self.namespace_db_map[namespace].get_all(mibs.COUNTERS_DB, \ - mibs.counter_table(sai_id), blocking=True) + 
counter_table = self.namespace_db_map[namespace].get_all(mibs.COUNTERS_DB, \ + mibs.counter_table(sai_id)) + if counter_table is None: + counter_table = {} + self.if_counters[if_idx] = counter_table + self.lag_name_if_name_map, \ self.if_name_lag_name_map, \ diff --git a/tests/mock_tables/counters_db.json b/tests/mock_tables/counters_db.json index 6914ffa42..a120b8a61 100755 --- a/tests/mock_tables/counters_db.json +++ b/tests/mock_tables/counters_db.json @@ -203,108 +203,6 @@ "SAI_PORT_STAT_PFC_7_RX_PKTS": "8", "SAI_PORT_STAT_PFC_7_TX_PKTS": "8" }, - "COUNTERS:oid:0x1000000000020": { - "SAI_PORT_STAT_ETHER_STATS_TX_NO_ERRORS": "0", - "SAI_PORT_STAT_ETHER_STATS_OVERSIZE_PKTS": "0", - "SAI_PORT_STAT_IF_OUT_ERRORS": "0", - "SAI_PORT_STAT_ETHER_TX_OVERSIZE_PKTS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_1519_TO_2047_OCTETS": "0", - "SAI_PORT_STAT_IP_IN_RECEIVES": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_64_OCTETS": "0", - "SAI_PORT_STAT_IPV6_OUT_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_4096_TO_9216_OCTETS": "0", - "SAI_PORT_STAT_IF_IN_ERRORS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS": "0", - "SAI_PORT_STAT_ETHER_STATS_BROADCAST_PKTS": "0", - "SAI_PORT_STAT_IF_IN_DISCARDS": "0", - "SAI_PORT_STAT_IP_OUT_DISCARDS": "0", - "SAI_PORT_STAT_IF_IN_UNKNOWN_PROTOS": "0", - "SAI_PORT_STAT_IPV6_IN_DISCARDS": "0", - "SAI_PORT_STAT_IPV6_OUT_DISCARDS": "0", - "SAI_PORT_STAT_IPV6_IN_OCTETS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_65_TO_127_OCTETS": "0", - "SAI_PORT_STAT_IF_IN_BROADCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_1519_TO_2047_OCTETS": "0", - "SAI_PORT_STAT_IF_OUT_MULTICAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_512_TO_1023_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_256_TO_511_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_9217_TO_16383_OCTETS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_512_TO_1023_OCTETS": "0", - "SAI_PORT_STAT_IPV6_IN_NON_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_4096_TO_9216_OCTETS": "0", - "SAI_PORT_STAT_IF_OUT_BROADCAST_PKTS": "0", - "SAI_PORT_STAT_IPV6_OUT_NON_UCAST_PKTS": "0", - "SAI_PORT_STAT_IF_IN_VLAN_DISCARDS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_65_TO_127_OCTETS": "0", - "SAI_PORT_STAT_IP_IN_NON_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_STATS_FRAGMENTS": "0", - "SAI_PORT_STAT_IPV6_IN_UCAST_PKTS": "0", - "SAI_PORT_STAT_IPV6_IN_RECEIVES": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_4096_TO_9216_OCTETS": "0", - "SAI_PORT_STAT_IF_OUT_DISCARDS": "0", - "SAI_PORT_STAT_ETHER_STATS_DROP_EVENTS": "0", - "SAI_PORT_STAT_IPV6_OUT_MCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_RX_OVERSIZE_PKTS": "0", - "SAI_PORT_STAT_IF_OUT_OCTETS": "0", - "SAI_PORT_STAT_IF_IN_NON_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_9217_TO_16383_OCTETS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_1024_TO_1518_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_2048_TO_4095_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_512_TO_1023_OCTETS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_1519_TO_2047_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_RX_NO_ERRORS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_64_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_COLLISIONS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_1024_TO_1518_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_1024_TO_1518_OCTETS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_256_TO_511_OCTETS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_2048_TO_4095_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_OCTETS": "0", - "SAI_PORT_STAT_IF_OUT_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_STATS_UNDERSIZE_PKTS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_128_TO_255_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_64_OCTETS": 
"0", - "SAI_PORT_STAT_IP_OUT_OCTETS": "0", - "SAI_PORT_STAT_IF_IN_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_9217_TO_16383_OCTETS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_2048_TO_4095_OCTETS": "0", - "SAI_PORT_STAT_IP_OUT_NON_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_STATS_JABBERS": "0", - "SAI_PORT_STAT_IF_IN_OCTETS": "0", - "SAI_PORT_STAT_IPV6_IN_MCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_65_TO_127_OCTETS": "0", - "SAI_PORT_STAT_IF_OUT_QLEN": "0", - "SAI_PORT_STAT_ETHER_STATS_PKTS_128_TO_255_OCTETS": "0", - "SAI_PORT_STAT_IP_IN_DISCARDS": "0", - "SAI_PORT_STAT_IPV6_OUT_OCTETS": "0", - "SAI_PORT_STAT_IF_OUT_NON_UCAST_PKTS": "0", - "SAI_PORT_STAT_IP_IN_OCTETS": "0", - "SAI_PORT_STAT_ETHER_OUT_PKTS_256_TO_511_OCTETS": "0", - "SAI_PORT_STAT_ETHER_STATS_CRC_ALIGN_ERRORS": "0", - "SAI_PORT_STAT_IP_OUT_UCAST_PKTS": "0", - "SAI_PORT_STAT_IP_IN_UCAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_STATS_MULTICAST_PKTS": "0", - "SAI_PORT_STAT_ETHER_IN_PKTS_128_TO_255_OCTETS": "0", - "SAI_PORT_STAT_IF_IN_MULTICAST_PKTS": "0", - "SAI_PORT_STAT_PAUSE_RX_PKTS": "0", - "SAI_PORT_STAT_PAUSE_TX_PKTS": "0", - "SAI_PORT_STAT_PFC_0_RX_PKTS": "1", - "SAI_PORT_STAT_PFC_0_TX_PKTS": "1", - "SAI_PORT_STAT_PFC_1_RX_PKTS": "2", - "SAI_PORT_STAT_PFC_1_TX_PKTS": "2", - "SAI_PORT_STAT_PFC_2_RX_PKTS": "3", - "SAI_PORT_STAT_PFC_2_TX_PKTS": "3", - "SAI_PORT_STAT_PFC_3_RX_PKTS": "4", - "SAI_PORT_STAT_PFC_3_TX_PKTS": "4", - "SAI_PORT_STAT_PFC_4_RX_PKTS": "5", - "SAI_PORT_STAT_PFC_4_TX_PKTS": "5", - "SAI_PORT_STAT_PFC_5_RX_PKTS": "6", - "SAI_PORT_STAT_PFC_5_TX_PKTS": "6", - "SAI_PORT_STAT_PFC_6_RX_PKTS": "7", - "SAI_PORT_STAT_PFC_6_TX_PKTS": "7", - "SAI_PORT_STAT_PFC_7_RX_PKTS": "8", - "SAI_PORT_STAT_PFC_7_TX_PKTS": "8" - }, "COUNTERS:oid:0x1000000000021": { "SAI_PORT_STAT_ETHER_STATS_TX_NO_ERRORS": "0", "SAI_PORT_STAT_ETHER_STATS_OVERSIZE_PKTS": "0", diff --git a/tests/mock_tables/dbconnector.py b/tests/mock_tables/dbconnector.py index 84ce65e7b..e6a30e660 100644 --- a/tests/mock_tables/dbconnector.py +++ b/tests/mock_tables/dbconnector.py @@ -65,7 +65,7 @@ def connect_SonicV2Connector(self, db_name, retry_on=True): _old_connect_SonicV2Connector(self, db_name, retry_on) -def _subscribe_keyspace_notification(self, db_name, client): +def _subscribe_keyspace_notification(self, db_name): pass From 43b5e1abefb7a1690e21d07eaa97479478bd5454 Mon Sep 17 00:00:00 2001 From: Vivek Reddy Date: Wed, 4 Aug 2021 20:42:59 -0700 Subject: [PATCH 05/15] CPU Spike because of redundant and flooded keyspace notifis handled (#230) **- What I did** Fixes [#8293](https://github.com/Azure/sonic-buildimage/issues/8293) **- How I did it** Accumulated all the older notifications and did act only upon the latest notification discarding the others --- src/sonic_ax_impl/mibs/ieee802_1ab.py | 28 +++++++++++++++++--------- tests/test_lldp.py | 29 +++++++++++++++++++++++++++ 2 files changed, 48 insertions(+), 9 deletions(-) diff --git a/src/sonic_ax_impl/mibs/ieee802_1ab.py b/src/sonic_ax_impl/mibs/ieee802_1ab.py index 852635236..989a455f8 100644 --- a/src/sonic_ax_impl/mibs/ieee802_1ab.py +++ b/src/sonic_ax_impl/mibs/ieee802_1ab.py @@ -96,6 +96,18 @@ def poll_lldp_entry_updates(pubsub): return ret return data, interface, if_index +def get_latest_notification(pubsub): + """ + Fetches the latest notification recorded on a lldp entry. 
+ """ + latest_update_map = {} + while True: + data, interface, if_index = poll_lldp_entry_updates(pubsub) + if not data: + break + latest_update_map[interface] = (data, if_index) + return latest_update_map + def parse_sys_capability(sys_cap): return bytearray([int (x, 16) for x in sys_cap.split()]) @@ -542,19 +554,17 @@ def _update_per_namespace_data(self, pubsub): """ Listen to updates in APP DB, update local cache """ - while True: - data, interface, if_index = poll_lldp_entry_updates(pubsub) - - if not data: - break - + event_cache = get_latest_notification(pubsub) + for interface in event_cache: + data = event_cache[interface][0] + if_index = event_cache[interface][1] + if "set" in data: self.update_rem_if_mgmt(if_index, interface) elif "del" in data: - # some remote data about that neighbor is gone, del it and try to query again + # if del is the latest notification, then delete it from the local cache self.if_range = [sub_oid for sub_oid in self.if_range if sub_oid[0] != if_index] - self.update_rem_if_mgmt(if_index, interface) - + def update_data(self): for i in range(len(self.db_conn)): if not self.pubsub[i]: diff --git a/tests/test_lldp.py b/tests/test_lldp.py index 0ff3e1059..d97491d8e 100644 --- a/tests/test_lldp.py +++ b/tests/test_lldp.py @@ -17,7 +17,12 @@ from ax_interface.pdu import PDU, PDUHeader from ax_interface.mib import MIBTable from sonic_ax_impl.mibs import ieee802_1ab +from mock import patch +def mock_poll_lldp_notif(mock_lldp_polled_entries): + if not mock_lldp_polled_entries: + return None, None, None + return mock_lldp_polled_entries.pop(0) class TestLLDPMIB(TestCase): @classmethod @@ -314,3 +319,27 @@ def test_getnextpdu_lldpRemSysCapEnabled(self): self.assertEqual(value0.type_, ValueType.OCTET_STRING) self.assertEqual(str(value0.name), str(ObjectIdentifier(12, 0, 1, 0, (1, 0, 8802, 1, 1, 2, 1, 4, 1, 1, 12, 1, 1)))) self.assertEqual(str(value0.data), "\x28\x00") + + @patch("sonic_ax_impl.mibs.ieee802_1ab.poll_lldp_entry_updates", mock_poll_lldp_notif) + def test_get_latest_notification(self): + mock_lldp_polled_entries = [] + mock_lldp_polled_entries.extend([("hset", "Ethernet0", "123"), + ("hset", "Ethernet4", "124"), + ("del", "Ethernet4", "124"), + ("del", "Ethernet8", "125"), + ("hset", "Ethernet8", "125"), + ("hset", "Ethernet4", "124"), + ("del", "Ethernet0", "123"), + ("hset", "Ethernet12", "126"), + ("del", "Ethernet12", "126"), + ("hset", "Ethernet0", "123"), + ("del", "Ethernet16", "127")]) + event_cache = ieee802_1ab.get_latest_notification(mock_lldp_polled_entries) + expect = {"Ethernet0" : ("hset", "123"), + "Ethernet4" : ("hset", "124"), + "Ethernet8" : ("hset", "125"), + "Ethernet12" : ("del", "126"), + "Ethernet16" : ("del", "127")} + for key in expect.keys(): + assert key in event_cache + self.assertEqual(expect[key], event_cache[key]) From fccb21b673fd1b6579043aaf73e6ba0cfd3e57af Mon Sep 17 00:00:00 2001 From: SuvarnaMeenakshi <50386592+SuvarnaMeenakshi@users.noreply.github.com> Date: Mon, 30 Aug 2021 16:36:30 -0700 Subject: [PATCH 06/15] [RFC1213]: Initialize lag oid map in reinit_data instead of (#232) - What I did Initialize lag oid map in reinit_data instead of updating in update_data. Updating lag oid map in update_data can cause descrepancy as interface oid map is updated in reinit_data. There could be a scenario when lag oid map has lag oids updated in update_data but interfaces oid map is not updated as interfaces oid map is updated in reinit_data. 
When SNMP service comes up, there could be a short instance of time when interfaces oid map (oid_sai_map) is empty or not complete but lag oid map (oid_lag_name_map) is updated with lag oids and lag members. At this short span, if a SNMP query is done to get interfaces counters, current code will try to get LAG counters and will fail at https://github.com/Azure/sonic-snmpagent/blob/master/src/sonic_ax_impl/mibs/ietf/rfc1213.py#L383 when trying to get oid of lag members' oid. This can lead to key error : sai_id = self.oid_sai_map[oid]#012KeyError. This change is to avoid this scenario. - How I did it init_sync_d_lag_tables in reinit_data instead of invoking in update_data. - How to verify it unit-test pass. SNMP walk of interface MIB output is complete. --- src/sonic_ax_impl/mibs/ietf/rfc1213.py | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/src/sonic_ax_impl/mibs/ietf/rfc1213.py b/src/sonic_ax_impl/mibs/ietf/rfc1213.py index 6e410ea79..a6c47ec1a 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc1213.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc1213.py @@ -235,6 +235,11 @@ def reinit_data(self): self.rif_port_map, \ self.port_rif_map = Namespace.get_sync_d_from_all_namespace(mibs.init_sync_d_rif_tables, self.db_conn) + self.lag_name_if_name_map, \ + self.if_name_lag_name_map, \ + self.oid_lag_name_map, \ + self.lag_sai_map, self.sai_lag_map = Namespace.get_sync_d_from_all_namespace(mibs.init_sync_d_lag_tables, self.db_conn) + def update_data(self): """ Update redis (caches config) @@ -246,11 +251,6 @@ def update_data(self): self.aggregate_counters() - self.lag_name_if_name_map, \ - self.if_name_lag_name_map, \ - self.oid_lag_name_map, \ - self.lag_sai_map, self.sai_lag_map = Namespace.get_sync_d_from_all_namespace(mibs.init_sync_d_lag_tables, self.db_conn) - self.if_range = sorted(list(self.oid_name_map.keys()) + list(self.oid_lag_name_map.keys()) + list(self.mgmt_oid_name_map.keys()) + From c2d494504d35c0efa2629d83ad582f18ea4c809f Mon Sep 17 00:00:00 2001 From: Lior Avramov <73036155+liorghub@users.noreply.github.com> Date: Fri, 17 Sep 2021 08:16:26 +0300 Subject: [PATCH 07/15] [snmp] Allow system with no ports in config db run without errors (#221) **What I did** Allow system with no ports in config db run without errors. This is needed for modular system which should boot properly without line cards. **How I did it** Remove snmpagent error exit if there are no ports in config DB or in counters DB. **How to verify it** Run snmpwalk on the root oid. --- src/sonic_ax_impl/mibs/__init__.py | 18 ++++++------------ src/sonic_ax_impl/mibs/ieee802_1ab.py | 2 +- tests/test_mibs.py | 18 ++++++++++++++++++ 3 files changed, 25 insertions(+), 13 deletions(-) diff --git a/src/sonic_ax_impl/mibs/__init__.py b/src/sonic_ax_impl/mibs/__init__.py index e482169de..a8e94fc29 100644 --- a/src/sonic_ax_impl/mibs/__init__.py +++ b/src/sonic_ax_impl/mibs/__init__.py @@ -271,7 +271,7 @@ def init_sync_d_interface_tables(db_conn): # { if_name (SONiC) -> sai_id } # ex: { "Ethernet76" : "1000000000023" } - if_name_map_util, if_id_map_util = port_util.get_interface_oid_map(db_conn) + if_name_map_util, if_id_map_util = port_util.get_interface_oid_map(db_conn, blocking=False) for if_name, sai_id in if_name_map_util.items(): if_name_str = if_name if (re.match(port_util.SONIC_ETHERNET_RE_PATTERN, if_name_str) or \ @@ -297,12 +297,8 @@ def init_sync_d_interface_tables(db_conn): # SyncD consistency checks. 
if not oid_name_map: - # In the event no interface exists that follows the SONiC pattern, no OIDs are able to be registered. - # A RuntimeError here will prevent the 'main' module from loading. (This is desirable.) - message = "No interfaces found matching pattern '{}'. SyncD database is incoherent." \ - .format(port_util.SONIC_ETHERNET_RE_PATTERN) - logger.error(message) - raise RuntimeError(message) + logger.debug("There are no ports in counters DB") + return {}, {}, {}, {} elif len(if_id_map) < len(if_name_map) or len(oid_name_map) < len(if_name_map): # a length mismatch indicates a bad interface name logger.warning("SyncD database contains incoherent interface names. Interfaces must match pattern '{}'" @@ -424,7 +420,7 @@ def init_sync_d_queue_tables(db_conn): # { Port name : Queue index (SONiC) -> sai_id } # ex: { "Ethernet0:2" : "1000000000023" } - queue_name_map = db_conn.get_all(COUNTERS_DB, COUNTERS_QUEUE_NAME_MAP, blocking=True) + queue_name_map = db_conn.get_all(COUNTERS_DB, COUNTERS_QUEUE_NAME_MAP, blocking=False) logger.debug("Queue name map:\n" + pprint.pformat(queue_name_map, indent=2)) # Parse the queue_name_map and create the following maps: @@ -455,10 +451,8 @@ def init_sync_d_queue_tables(db_conn): # SyncD consistency checks. if not port_queues_map: - # In the event no queue exists that follows the SONiC pattern, no OIDs are able to be registered. - # A RuntimeError here will prevent the 'main' module from loading. (This is desirable.) - logger.error("No queues found in the Counter DB. SyncD database is incoherent.") - raise RuntimeError('The port_queues_map is not defined') + logger.debug("Counters DB does not contain ports") + return {}, {}, {} elif not queue_stat_map: logger.error("No queue stat counters found in the Counter DB. SyncD database is incoherent.") raise RuntimeError('The queue_stat_map is not defined') diff --git a/src/sonic_ax_impl/mibs/ieee802_1ab.py b/src/sonic_ax_impl/mibs/ieee802_1ab.py index 989a455f8..85989c992 100644 --- a/src/sonic_ax_impl/mibs/ieee802_1ab.py +++ b/src/sonic_ax_impl/mibs/ieee802_1ab.py @@ -196,7 +196,7 @@ def reinit_data(self): self.if_range.append((if_oid, )) self.if_range.sort() if not self.loc_port_data: - logger.warning("0 - b'PORT_TABLE' is empty. No local port information could be retrieved.") + logger.debug("0 - b'PORT_TABLE' is empty. 
No local port information could be retrieved.") def _get_if_entry(self, if_name): if_table = "" diff --git a/tests/test_mibs.py b/tests/test_mibs.py index f8389d656..0f4367dec 100644 --- a/tests/test_mibs.py +++ b/tests/test_mibs.py @@ -4,6 +4,11 @@ import tests.mock_tables.dbconnector +if sys.version_info.major == 3: + from unittest import mock +else: + import mock + modules_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) sys.path.insert(0, os.path.join(modules_path, 'src')) @@ -32,3 +37,16 @@ def test_init_sync_d_lag_tables(self): self.assertTrue("PortChannel_Temp" in lag_name_if_name_map) self.assertTrue(lag_name_if_name_map["PortChannel_Temp"] == []) self.assertTrue(lag_sai_map["PortChannel01"] == "2000000000006") + + @mock.patch('swsssdk.dbconnector.SonicV2Connector.get_all', mock.MagicMock(return_value=({}))) + def test_init_sync_d_interface_tables(self): + db_conn = Namespace.init_namespace_dbs() + + if_name_map, \ + if_alias_map, \ + if_id_map, \ + oid_name_map = Namespace.get_sync_d_from_all_namespace(mibs.init_sync_d_interface_tables, db_conn) + self.assertTrue(if_name_map == {}) + self.assertTrue(if_alias_map == {}) + self.assertTrue(if_id_map == {}) + self.assertTrue(oid_name_map == {}) From a07da536186a9d9698ec63ec3b1e75bef097ae00 Mon Sep 17 00:00:00 2001 From: Raphael Tryster <75927947+raphaelt-nvidia@users.noreply.github.com> Date: Tue, 26 Oct 2021 17:18:06 +0300 Subject: [PATCH 08/15] Removed unused variables in rfc2863.py (#237) **- What I did** Removed unused variables rif_port_map and port_rif_map in rfc2863.py **- How I did it** Edited rfc2863.py as requested by @qiluo-msft in merged PR https://github.com/Azure/sonic-snmpagent/pull/218. **- How to verify it** pytest-3 test_interfaces.py and pytest-3 namespace/test_interfaces.py in sonic-snmpagent/tests. **- Description for the changelog** Removed unused variables. --- src/sonic_ax_impl/mibs/ietf/rfc2863.py | 4 ---- 1 file changed, 4 deletions(-) diff --git a/src/sonic_ax_impl/mibs/ietf/rfc2863.py b/src/sonic_ax_impl/mibs/ietf/rfc2863.py index 96d6bf3e4..6cdbc518c 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc2863.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc2863.py @@ -93,7 +93,6 @@ def __init__(self): self.mgmt_alias_map = {} self.vlan_oid_name_map = {} self.vlan_name_map = {} - self.rif_port_map = {} self.if_counters = {} self.if_range = [] self.if_name_map = {} @@ -128,9 +127,6 @@ def reinit_data(self): self.vlan_oid_sai_map, \ self.vlan_oid_name_map = Namespace.get_sync_d_from_all_namespace(mibs.init_sync_d_vlan_tables, self.db_conn) - self.rif_port_map, \ - self.port_rif_map = Namespace.get_sync_d_from_all_namespace(mibs.init_sync_d_rif_tables, self.db_conn) - self.if_range = sorted(list(self.oid_name_map.keys()) + list(self.oid_lag_name_map.keys()) + list(self.mgmt_oid_name_map.keys()) + From df615c4e6cb750cf6a2b3f88d67822d485ecc16f Mon Sep 17 00:00:00 2001 From: "Marty Y. 
Lok" <76118573+mlok-nokia@users.noreply.github.com> Date: Thu, 11 Nov 2021 18:09:32 -0500 Subject: [PATCH 09/15] [Voq][Inband] Support the Ethernet-IB port (#228) Modified init_sync_d_interface_tables() to support the VoQ Inband interface Ethernet-IB port name This commit depends on -- Azure/sonic-py-swsssdk#113 which create the Inband index and name mapping --- src/sonic_ax_impl/mibs/__init__.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/src/sonic_ax_impl/mibs/__init__.py b/src/sonic_ax_impl/mibs/__init__.py index a8e94fc29..0c3a7dba0 100644 --- a/src/sonic_ax_impl/mibs/__init__.py +++ b/src/sonic_ax_impl/mibs/__init__.py @@ -275,7 +275,8 @@ def init_sync_d_interface_tables(db_conn): for if_name, sai_id in if_name_map_util.items(): if_name_str = if_name if (re.match(port_util.SONIC_ETHERNET_RE_PATTERN, if_name_str) or \ - re.match(port_util.SONIC_ETHERNET_BP_RE_PATTERN, if_name_str)): + re.match(port_util.SONIC_ETHERNET_BP_RE_PATTERN, if_name_str) or \ + re.match(port_util.SONIC_ETHERNET_IB_RE_PATTERN, if_name_str)): if_name_map[if_name] = sai_id # As sai_id is not unique in multi-asic platform, concatenate it with # namespace to get a unique key. Assuming that ':' is not present in namespace @@ -283,7 +284,8 @@ def init_sync_d_interface_tables(db_conn): # sai_id_key = namespace : sai_id for sai_id, if_name in if_id_map_util.items(): if (re.match(port_util.SONIC_ETHERNET_RE_PATTERN, if_name) or \ - re.match(port_util.SONIC_ETHERNET_BP_RE_PATTERN, if_name)): + re.match(port_util.SONIC_ETHERNET_BP_RE_PATTERN, if_name) or \ + re.match(port_util.SONIC_ETHERNET_IB_RE_PATTERN, if_name)): if_id_map[get_sai_id_key(db_conn.namespace, sai_id)] = if_name logger.debug("Port name map:\n" + pprint.pformat(if_name_map, indent=2)) logger.debug("Interface name map:\n" + pprint.pformat(if_id_map, indent=2)) From b8ea609321431aaf4385ca999fe02691c7cfd5ce Mon Sep 17 00:00:00 2001 From: SuvarnaMeenakshi <50386592+SuvarnaMeenakshi@users.noreply.github.com> Date: Fri, 31 Dec 2021 12:58:36 -0800 Subject: [PATCH 10/15] Modify path of python wheels to be installed. (#240) **- What I did** Modify path of python wheels to be installed as per changes in sonic-buildimage repo. **- How I did it** Modify path of swsssdk and sonic-py-common python wheels. 
--- azure-pipelines.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/azure-pipelines.yml b/azure-pipelines.yml index f73a8e06a..a2041f229 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -46,8 +46,8 @@ stages: set -ex sudo apt-get -y purge libhiredis-dev libnl-3-dev libnl-route-3-dev sudo dpkg -i ../target/debs/buster/{libswsscommon_1.0.0_amd64.deb,python3-swsscommon_1.0.0_amd64.deb,libnl-3-200_*.deb,libnl-genl-3-200_*.deb,libnl-nf-3-200_*.deb,libnl-route-3-200_*.deb,libhiredis0.14_*.deb} - sudo python3 -m pip install ../target/python-wheels/swsssdk*-py3-*.whl - sudo python3 -m pip install ../target/python-wheels/sonic_py_common-1.0-py3-none-any.whl + sudo python3 -m pip install ../target/python-wheels/buster/swsssdk*-py3-*.whl + sudo python3 -m pip install ../target/python-wheels/buster/sonic_py_common-1.0-py3-none-any.whl python3 setup.py bdist_wheel cp dist/*.whl $(Build.ArtifactStagingDirectory)/ displayName: "Build" From 3013597856aa1173f4aa7e4cbd13041e96344acc Mon Sep 17 00:00:00 2001 From: SuvarnaMeenakshi <50386592+SuvarnaMeenakshi@users.noreply.github.com> Date: Mon, 3 Jan 2022 17:18:47 -0800 Subject: [PATCH 11/15] Fix Queue stat unavailable error seen during SNMP service start (#238) - What I did Ideally SNMP service starts only after swss/syncd comes up. But due to timing of bring up, It could happen that SNMP is trying to retrieve queue stat counter before it is update by syncd. This is only seen once when the SNMP service comes up, as from the next iteration, syncd has updated the queue stats and it is available for SNMP to use. ERR snmp#snmp-subagent [sonic_ax_impl] ERROR: No queue stat counters found in the Counter DB. SyncD database is incoherent. This message is seen only once in all the cases observed, which means that once the counters are populated, snmp is able to retrieve the counters. - How I did it If counters are not found, return empty dicts since SNMP is just supposed to collect data and provide the data it has. - How to verify it Added unit-test. If counters_db is not update, querying the QueueStats MIB should not return any output. --- src/sonic_ax_impl/mibs/__init__.py | 7 +++---- tests/test_mibs.py | 12 ++++++++++++ 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/src/sonic_ax_impl/mibs/__init__.py b/src/sonic_ax_impl/mibs/__init__.py index 0c3a7dba0..fb5d0be1c 100644 --- a/src/sonic_ax_impl/mibs/__init__.py +++ b/src/sonic_ax_impl/mibs/__init__.py @@ -44,7 +44,6 @@ redis_kwargs = {'unix_socket_path': '/var/run/redis/redis.sock'} - def get_neigh_info(neigh_key): """ split neigh_key string of the format: @@ -455,9 +454,9 @@ def init_sync_d_queue_tables(db_conn): if not port_queues_map: logger.debug("Counters DB does not contain ports") return {}, {}, {} - elif not queue_stat_map: - logger.error("No queue stat counters found in the Counter DB. 
SyncD database is incoherent.") - raise RuntimeError('The queue_stat_map is not defined') + if not queue_stat_map: + logger.debug("No queue stat counters found in the Counter DB.") + return {}, {}, {} for queues in port_queue_list_map.values(): queues.sort() diff --git a/tests/test_mibs.py b/tests/test_mibs.py index 0f4367dec..b8b680ae3 100644 --- a/tests/test_mibs.py +++ b/tests/test_mibs.py @@ -3,6 +3,7 @@ from unittest import TestCase import tests.mock_tables.dbconnector +from sonic_ax_impl import mibs if sys.version_info.major == 3: from unittest import mock @@ -50,3 +51,14 @@ def test_init_sync_d_interface_tables(self): self.assertTrue(if_alias_map == {}) self.assertTrue(if_id_map == {}) self.assertTrue(oid_name_map == {}) + + @mock.patch('swsssdk.dbconnector.SonicV2Connector.get_all', mock.MagicMock(return_value=({}))) + def test_init_sync_d_queue_tables(self): + mock_queue_stat_map = {} + db_conn = Namespace.init_namespace_dbs() + + port_queues_map, queue_stat_map, port_queue_list_map = \ + Namespace.get_sync_d_from_all_namespace(mibs.init_sync_d_queue_tables, db_conn) + self.assertTrue(port_queues_map == {}) + self.assertTrue(queue_stat_map == {}) + self.assertTrue(port_queue_list_map == {}) From 4ee573cddc4e356b589bcf29c19cedc4562a8b34 Mon Sep 17 00:00:00 2001 From: Kebo Liu Date: Wed, 2 Feb 2022 01:03:39 +0800 Subject: [PATCH 12/15] fix RFC2737 with update xcvr vendor version key name (#241) - What I did In the TRANSCEIVER_INFO table of STATE_DB, the key of transceiver reversion was changed from "hardware_rev" to "vendor_rev", detail info please refer to PR Azure/sonic-platform-daemons#231 RFC2737 implementation needs to be updated with the new key name in order to get the correct info from the state DB - How I did it Update the key name from "hardware_rev" to "vendor_rev", update the unit test cases. - How to verify it Run the community SNMP test. 
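In code terms the rename is a one-field change; a tiny illustration with an invented TRANSCEIVER_INFO row (values are placeholders, field names follow the mock tables below):

```python
# Made-up TRANSCEIVER_INFO row for illustration.
xcvr_info = {"type": "QSFP+", "vendor_rev": "A1", "serial": "SERIAL_NUM"}
# The transceiver revision is now read from "vendor_rev" (previously "hardware_rev").
revision = xcvr_info.get("vendor_rev", "")
```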
- Description for the changelog --- src/sonic_ax_impl/mibs/ietf/rfc2737.py | 2 +- tests/mock_tables/asic0/state_db.json | 2 +- tests/mock_tables/asic1/state_db.json | 2 +- tests/mock_tables/state_db.json | 4 ++-- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/src/sonic_ax_impl/mibs/ietf/rfc2737.py b/src/sonic_ax_impl/mibs/ietf/rfc2737.py index 9c123bf25..43bd0cffb 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc2737.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc2737.py @@ -96,7 +96,7 @@ class XcvrInfoDB(str, Enum): Transceiver info keys """ TYPE = "type" - HARDWARE_REVISION = "hardware_rev" + VENDOR_REVISION = "vendor_rev" SERIAL_NUMBER = "serial" MANUFACTURE_NAME = "manufacturer" MODEL_NAME = "model" diff --git a/tests/mock_tables/asic0/state_db.json b/tests/mock_tables/asic0/state_db.json index 0793070eb..bf0405422 100644 --- a/tests/mock_tables/asic0/state_db.json +++ b/tests/mock_tables/asic0/state_db.json @@ -1,7 +1,7 @@ { "TRANSCEIVER_INFO|Ethernet0": { "type": "QSFP+", - "hardware_rev": "A1", + "vendor_rev": "A1", "serial": "SERIAL_NUM", "manufacturer": "VENDOR_NAME", "model": "MODEL_NAME" diff --git a/tests/mock_tables/asic1/state_db.json b/tests/mock_tables/asic1/state_db.json index f68a58aa0..9af1a6316 100644 --- a/tests/mock_tables/asic1/state_db.json +++ b/tests/mock_tables/asic1/state_db.json @@ -1,7 +1,7 @@ { "TRANSCEIVER_INFO|Ethernet8": { "type": "QSFP+", - "hardware_rev": "A1", + "vendor_rev": "A1", "serial": "SERIAL_NUM", "manufacturer": "VENDOR_NAME", "model": "MODEL_NAME" diff --git a/tests/mock_tables/state_db.json b/tests/mock_tables/state_db.json index ae45b16a4..b90a10d60 100644 --- a/tests/mock_tables/state_db.json +++ b/tests/mock_tables/state_db.json @@ -34,7 +34,7 @@ }, "TRANSCEIVER_INFO|Ethernet0": { "type": "QSFP+", - "hardware_rev": "A1", + "vendor_rev": "A1", "serial": "SERIAL_NUM", "manufacturer": "VENDOR_NAME", "model": "MODEL_NAME", @@ -42,7 +42,7 @@ }, "TRANSCEIVER_INFO|Ethernet1": { "type": "QSFP-DD", - "hardware_rev": "A1", + "vendor_rev": "A1", "serial": "SERIAL_NUM", "manufacturer": "VENDOR_NAME", "model": "MODEL_NAME", From dae8146b157e174e5782ddae2a8d877aa1bf7790 Mon Sep 17 00:00:00 2001 From: Qi Luo Date: Fri, 11 Feb 2022 16:05:28 -0800 Subject: [PATCH 13/15] [ci]: Support code diff coverage (#243) Support code diff coverage, and set threshold to 50% --- azure-pipelines.yml | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/azure-pipelines.yml b/azure-pipelines.yml index a2041f229..95e3ccd37 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -19,7 +19,9 @@ stages: - job: displayName: "build" timeoutInMinutes: 60 - + variables: + DIFF_COVER_CHECK_THRESHOLD: 50 + DIFF_COVER_ENABLE: 'true' pool: vmImage: ubuntu-20.04 From 6bd51c46d2325b8e6f2397e735464f43ad4c4f41 Mon Sep 17 00:00:00 2001 From: mad4321 <40406420+mad4321@users.noreply.github.com> Date: Sat, 12 Mar 2022 00:53:31 +0200 Subject: [PATCH 14/15] Fix: LAG counters, if LAG don't have L3 interface (#236) **- What I did** A KeyError exception raised in rfc1213.py if LAG port don't have L3 interface ```` Oct 25 14:10:29.864852 sonic ERR snmp#snmp-subagent [ax_interface] ERROR: SubtreeMIBEntry.__call__() caught an unexpected exception during _callable_.__call__() #012Traceback (most recent call last): #012 File "/usr/local/lib/python3.7/dist-packages/ax_interface/mib.py", line 194, in __call__ #012 return self._callable_.__call__(sub_id, *self._callable_args) #012 File "/usr/local/lib/python3.7/dist-packages/sonic_ax_impl/mibs/ietf/rfc1213.py", line 413, in 
get_counter #012 sai_lag_rif_id = self.port_rif_map[sai_lag_id]#012KeyError: '20000000007c2' ```` **- How I did it** Checked if sai_lag_id is contained in port_rif_map **- How to verify it** Build docker-snmp. No exception is observed. --- src/sonic_ax_impl/mibs/ietf/rfc1213.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/sonic_ax_impl/mibs/ietf/rfc1213.py b/src/sonic_ax_impl/mibs/ietf/rfc1213.py index a6c47ec1a..8caff2a31 100644 --- a/src/sonic_ax_impl/mibs/ietf/rfc1213.py +++ b/src/sonic_ax_impl/mibs/ietf/rfc1213.py @@ -410,7 +410,7 @@ def get_counter(self, sub_id, table_name): # self.lag_sai_map['PortChannel01'] = '2000000000006' # self.port_rif_map['2000000000006'] = '6000000000006' sai_lag_id = self.lag_sai_map[self.oid_lag_name_map[oid]] - sai_lag_rif_id = self.port_rif_map[sai_lag_id] + sai_lag_rif_id = self.port_rif_map[sai_lag_id] if sai_lag_id in self.port_rif_map else None if sai_lag_rif_id in self.rif_port_map: # Extract the 'name' part of 'table_name'. # Example: From 2654f4a667941296d4e56a16e8e1a7d1d5fca7b6 Mon Sep 17 00:00:00 2001 From: liuh-80 <58683130+liuh-80@users.noreply.github.com> Date: Fri, 18 Mar 2022 16:19:42 +0800 Subject: [PATCH 15/15] Fix snmp agent Initialize config DB multiple times issue (#245) **- What I did** Fix following code issue: 1. When initialize SonicDBConfig on multi ASIC device, not check if the global config already initialized issue. 2. Initialize SonicDBConfig multiple times with dupe code. **- How I did it** Code change to check isGlobalInit before load global config. Move dupe code to Namespace.init_sonic_db_config() and initialize config only once. Add new mock method for SonicDBConfig.isGlobalInit **- How to verify it** Pass all UT and E2E test. **- Description for the changelog** --- src/sonic_ax_impl/main.py | 4 +++- src/sonic_ax_impl/mibs/__init__.py | 37 ++++++++++++++++++++---------- tests/mock_tables/dbconnector.py | 4 ++++ 3 files changed, 32 insertions(+), 13 deletions(-) diff --git a/src/sonic_ax_impl/main.py b/src/sonic_ax_impl/main.py index 672a70171..281ab601b 100644 --- a/src/sonic_ax_impl/main.py +++ b/src/sonic_ax_impl/main.py @@ -9,7 +9,7 @@ import sys import ax_interface -from sonic_ax_impl.mibs import ieee802_1ab +from sonic_ax_impl.mibs import ieee802_1ab, Namespace from . import logger from .mibs.ietf import rfc1213, rfc2737, rfc2863, rfc3433, rfc4292, rfc4363 from .mibs.vendor import dell, cisco @@ -58,6 +58,8 @@ def main(update_frequency=None): global event_loop try: + Namespace.init_sonic_db_config() + # initialize handler and set update frequency (or use the default) agent = ax_interface.Agent(SonicMIB, update_frequency or DEFAULT_UPDATE_FREQUENCY, event_loop) diff --git a/src/sonic_ax_impl/mibs/__init__.py b/src/sonic_ax_impl/mibs/__init__.py index fb5d0be1c..86044d502 100644 --- a/src/sonic_ax_impl/mibs/__init__.py +++ b/src/sonic_ax_impl/mibs/__init__.py @@ -218,16 +218,11 @@ def init_db(): Connects to DB :return: db_conn """ - if not SonicDBConfig.isInit(): - if multi_asic.is_multi_asic(): - # Load the global config file database_global.json once. - SonicDBConfig.load_sonic_global_db_config() - else: - SonicDBConfig.load_sonic_db_config() + Namespace.init_sonic_db_config() + # SyncD database connector. THIS MUST BE INITIALIZED ON A PER-THREAD BASIS. # Redis PubSub objects (such as those within swsssdk) are NOT thread-safe. 
db_conn = SonicV2Connector(**redis_kwargs) - return db_conn def init_mgmt_interface_tables(db_conn): @@ -536,14 +531,32 @@ def get_oidvalue(self, oid): return self.oid_map[oid] class Namespace: + + """ + Sonic database initialized flag. + """ + db_config_loaded = False + + @staticmethod + def init_sonic_db_config(): + """ + Initialize SonicDBConfig + """ + if Namespace.db_config_loaded: + return + + if multi_asic.is_multi_asic(): + # Load the global config file database_global.json once. + SonicDBConfig.load_sonic_global_db_config() + else: + SonicDBConfig.load_sonic_db_config() + + Namespace.db_config_loaded = True + @staticmethod def init_namespace_dbs(): db_conn = [] - if not SonicDBConfig.isInit(): - if multi_asic.is_multi_asic(): - SonicDBConfig.load_sonic_global_db_config() - else: - SonicDBConfig.load_sonic_db_config() + Namespace.init_sonic_db_config() host_namespace_idx = 0 for idx, namespace in enumerate(SonicDBConfig.get_ns_list()): if namespace == multi_asic.DEFAULT_NAMESPACE: diff --git a/tests/mock_tables/dbconnector.py b/tests/mock_tables/dbconnector.py index e6a30e660..6a5cbd997 100644 --- a/tests/mock_tables/dbconnector.py +++ b/tests/mock_tables/dbconnector.py @@ -25,6 +25,9 @@ def clean_up_config(): SonicDBConfig._sonic_db_global_config_init = False SonicDBConfig._sonic_db_config_init = False +def mock_SonicDBConfig_isGlobalInit(): + return SonicDBConfig._sonic_db_global_config_init + # TODO Convert this to fixture as all Test classes require it. def load_namespace_config(): @@ -140,6 +143,7 @@ def keys(self, pattern='*'): SonicV2Connector.connect = connect_SonicV2Connector swsscommon.SonicV2Connector = SonicV2Connector swsscommon.SonicDBConfig = SonicDBConfig +swsscommon.SonicDBConfig.isGlobalInit = mock_SonicDBConfig_isGlobalInit # pytest case collecting will import some module before monkey patch, so reload from importlib import reload
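Taken together, the last patch funnels SonicDBConfig loading through a single idempotent entry point; a rough usage sketch (the names come from the diff above, the calling sequence itself is only illustrative):

```python
from sonic_ax_impl.mibs import Namespace

Namespace.init_sonic_db_config()          # first call loads the (global) DB config
Namespace.init_sonic_db_config()          # later calls are no-ops via db_config_loaded
db_conn = Namespace.init_namespace_dbs()  # also safe: it invokes the initializer itself
```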