
[interfaces]: Move IP/MTU information from interfaces file into database #1908

Merged
merged 1 commit on Aug 20, 2018

Conversation

stcheng
Contributor

@stcheng stcheng commented Aug 9, 2018

  • Move front panel ports and port channels MTU and IP configurations out of
    the current /etc/network/interfaces file and store them in the configuration
    database.

  • The default MTU value for both front panel ports and the port channels is
    9100.

  • Add the option to indicate the desired MTU settings in the port_config.ini
    file.

  • Introduce portmgrd, which will pick up the MTU configurations from the
    configuration database.

  • The updated intfmgrd will pick up IP address changes from the configuration
    database.

Signed-off-by: Shu0T1an ChenG [email protected]
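
For illustration, a minimal sketch of the ConfigDB contents this change targets, written as a Python dict (the PORT/INTERFACE table layout follows the usual SONiC schema; the concrete port, lanes, and prefix values are made up):

```python
# Minimal sketch, not from the PR: MTU and IP information that previously
# lived in /etc/network/interfaces, expressed as ConfigDB tables.
config_db = {
    "PORT": {
        "Ethernet0": {
            "alias": "fortyGigE0/0",
            "lanes": "29,30,31,32",
            "mtu": "9100",            # consumed by the new portmgrd
        },
    },
    "INTERFACE": {
        "Ethernet0|10.0.0.1/31": {},  # IP prefix consumed by intfmgrd
    },
}
```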

@@ -29,7 +29,7 @@ def parse_port_config_file(port_config_file):
     ports = {}
     port_alias_map = {}
     # Default column definition
-    titles = ['name', 'lanes', 'alias', 'index']
+    titles = ['name', 'lanes', 'alias', 'index', 'mtu']
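
For context, a port_config.ini row carrying the proposed optional mtu column, and how this parser would split it, might look roughly like the following (the values are illustrative, and the column was later dropped, as the review comment below notes):

```python
# Illustrative only: a whitespace-separated port_config.ini line matching
# the extended titles list above (values are made up).
line = "Ethernet0   29,30,31,32   fortyGigE0/0   0   9100"
titles = ['name', 'lanes', 'alias', 'index', 'mtu']
entry = dict(zip(titles, line.split()))
print(entry['name'], entry.get('mtu', '9100'))  # -> Ethernet0 9100
```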

As we discussed, we won't be needing mtu in port_config.ini.

@stcheng force-pushed the intf branch 2 times, most recently from b4d3ed7 to 88cf3aa on August 9, 2018 19:11
@lguohan
Collaborator

lguohan commented Aug 10, 2018

Change the virtual switch as well.

Can you specify the dependencies of your commit?

mtu {{ PORT[name]['mtu'] if PORT[name]['mtu'] else 9100 }}
address {{ prefix | ip }}
netmask {{ prefix | netmask if prefix | ipv4 else prefix | prefixlen }}
iface {{ name }} {{ 'inet' if prefix | ipv4 else 'inet6' }} manual

Why do we still keep up/down in interfaces.j2? I thought we were going to remove all of them from interfaces.j2.

@stcheng
Contributor Author

stcheng commented Aug 10, 2018

This pull request requires the following three commits from sonic-swss as well:
sonic-net/sonic-swss#544
sonic-net/sonic-swss#545
sonic-net/sonic-swss#574

@lguohan
Collaborator

lguohan commented Aug 16, 2018

@stcheng, all deps are merged; can you update the submodule?

@stcheng
Contributor Author

stcheng commented Aug 16, 2018

@lguohan, updated

- Move front panel ports and port channels MTU and IP configurations out of
the current /etc/network/interfaces file and store them in the configuration
database.

- The default MTU value for both front panel ports and the port channels is
9100. The MTU is set via the minigraph, or falls back to 9100 by default.

- Introduce portmgrd which will pick up the MTU configurations from the
configuration database.

- The updated intfmgrd will pick up IP address changes from the configuration
database.

- Update sonic-swss submodule

Signed-off-by: Shu0T1an ChenG <[email protected]>
@nikos-github
Collaborator

@stcheng @lguohan @zhenggen-xu @jipanyang With this change, existing configurations break on upgrade because config_db.json didn't require admin_status (I just upgraded my setup and all the interfaces were down). Any reason why we can't default a port to up if config_db.json doesn't set its admin_status to down?

@zhenggen-xu
Collaborator

zhenggen-xu commented Sep 18, 2018

@nikos-github I haven't tried the image with this check-in. In general, if the port is part of a VLAN or has an IP address configured, it would be brought up during initialization. I assume that was the case since you just did the upgrade, but can you double-check whether orchagent etc. is running fine?

@nikos-github
Collaborator

@zhenggen-xu The specific configs I had were only for front panel ports. I upgraded to a newer image and all interfaces were admin down. This change as it stands will break all devices when existing configs are deployed on upgrade.

@zhenggen-xu
Collaborator

If ports have IPs but are down due to the change of moving things to the DB, that is a regression bug that needs to be fixed. @stcheng can you confirm whether that could be the case?

@stcheng
Contributor Author

stcheng commented Sep 18, 2018

@zhenggen-xu Yes, I have set the default admin status to up when generating the configuration from the minigraph. However, when inheriting the old configurations, it causes the regression where the ports are not brought up. @nikos-github do you have any suggestions for fixing this?
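
For illustration only, a one-off workaround along these lines could mark ports that carry no explicit admin_status as up. This is a sketch assuming the swsssdk ConfigDBConnector API and is not part of this PR:

```python
# Sketch: default admin_status to "up" for PORT entries that don't set it.
# Assumes swsssdk's ConfigDBConnector; intended to run on the switch itself.
from swsssdk import ConfigDBConnector

db = ConfigDBConnector()
db.connect()
for port, attrs in db.get_table('PORT').items():
    if 'admin_status' not in attrs:
        # mod_entry updates only the listed fields of the entry
        db.mod_entry('PORT', port, {'admin_status': 'up'})
```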

prsunny pushed a commit that referenced this pull request Sep 28, 2021
* [Submodule] update for swss with the following commits:
a3fdaf4 QOS fieldvalue reference ABNF format to string changes (#1754)
a8fcadf Add sleep to ensure starting route perf test after the vs is stable (#1929)
a89d1f8 Fix failing DPB LAG tests (#1919)
86b4ede [portsorch] Avoid orchagent crash when set invalid interface types to port (#1906)
025032f [VS] Skip failing test - test_recirc_port (#1918)
d338bd0 [pfcwd] Fix the polling interval time granularity (#1912)
14c937e Enabling copp tests (#1914)
fbdcaae [teammgrd]: Improve LAGs cleanup on shutdown: send SIGTERM directly to PID. (#1841)
002bb1d [tlm teamd] Add retry mechanism before logging the ERR in get_dumps. (#1629)
57d21e7 [pfcwd] Convert polling interval from ms to us in LUA scripts (#1908)
d01524d [fgnhgorch] Enable packet flow when no FG ECMP neighbors are resolved (#1900)
8cf355d Mux state order change (#1902)
judyjoseph added a commit that referenced this pull request Oct 14, 2021
08b05db [pfcwd] Convert polling interval from ms to us in LUA scripts (#1908)
ebb1d6c [cfgmgr] Fix for STATE_DB Port check (#1936)
theasianpianist pushed a commit to theasianpianist/sonic-buildimage that referenced this pull request Feb 5, 2022
…net#1908)

**What I did**
Converted polling interval from milliseconds to microseconds in PFCWD LUA scripts.

**Why I did it**
PFCWD storm detection and restoration LUA scripts require values in microseconds.
Due to recent changes, the polling interval is now passed in milliseconds by "FlexCounter":
* sonic-net/sonic-sairedis#878

So we need to convert the values to microseconds (as it was before) in order to make PFCWD work.

**How I verified it**
Ran PFCWD tests from sonic-mgmt.
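
For clarity, the conversion itself is just a factor of 1000; a trivial sketch of the intent (the real change lives in the LUA scripts, not in Python):

```python
def ms_to_us(poll_interval_ms: int) -> int:
    # FlexCounter now passes the polling interval in milliseconds, while the
    # PFCWD detection/restoration scripts expect microseconds.
    return poll_interval_ms * 1000

assert ms_to_us(200) == 200_000
```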
taras-keryk pushed a commit to taras-keryk/sonic-buildimage that referenced this pull request Apr 28, 2022
…onic-net#1923)

#### What I did
Fixing issue sonic-net#1908 

#### How I did it
- When the given patch produces an empty table, reject it, since ConfigDb cannot show empty tables. Achieved that by:
  - Adding a validation step before patch-sorting
- The patch-sorter should not produce steps which result in empty tables, because that is not possible in ConfigDb; we should instead delete the whole table. Achieved that by:
  - Adding a new validator to reject moves producing empty tables
  - No need to add a generator for deleting the whole table; the current generators take care of that. They do so as follows:
    - The move to empty a table is rejected by `NoEmptyTableMoveValidator`
    - The rejected move is then used to generate moves using `UpperLevelMoveExtender`, which produces either
      - a Remove move for the parent -- this is good, we are removing the table
      - a Replace move for the parent -- this is later extended to a Delete move using `DeleteInsteadOfReplaceMoveExtender`

The generators/validators are documented in `PatchSorter.py`; check the documentation out.
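
As a rough illustration of the validation described above, the empty-table check amounts to scanning the simulated target config for tables that map to {}. A minimal sketch with made-up names (the real validators live alongside `PatchSorter.py`):

```python
# Sketch only: ConfigDb cannot represent "TABLE": {}, so a target config
# containing empty tables must be rejected before the patch is applied.
def find_empty_tables(target_config: dict) -> list:
    return [table for table, entries in target_config.items() if entries == {}]

target = {"DEVICE_METADATA": {}, "PORT": {"Ethernet0": {"mtu": "9100"}}}
assert find_empty_tables(target) == ["DEVICE_METADATA"]
```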

#### How to verify it
unit-test

#### Previous command output (if the output of a command-line utility has changed)
```sh
admin@vlab-01:~$ sudo config apply-patch empty-a-table.json-patch -v
Patch Applier: Patch application starting.
Patch Applier: Patch: [{"op": "replace", "path": "/DEVICE_METADATA", "value": {}}]
Patch Applier: Validating patch is not making changes to tables without YANG models.
Patch Applier: Getting current config db.
Patch Applier: Simulating the target full config after applying the patch.
Patch Applier: Validating target config according to YANG models.
... sonic-yang-mgmt logs
Patch Applier: Sorting patch updates.
... sonic-yang-mgmt logs
Patch Applier: The patch was sorted into 11 changes:
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/bgp_asn"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/buffer_model"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/default_bgp_status"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/default_pfcwd_status"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/deployment_id"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/docker_routing_config_mode"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/hostname"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/hwsku"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/mac"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost/platform"}]
Patch Applier:   * [{"op": "remove", "path": "/DEVICE_METADATA/localhost"}]
Patch Applier: Applying changes in order.
Patch Applier: Verifying patch updates are reflected on ConfigDB.
Failed to apply patch
Usage: config apply-patch [OPTIONS] PATCH_FILE_PATH
Try "config apply-patch -h" for help.

Error: After applying patch to config, there are still some parts not updated
admin@vlab-01:~$ 
```

The above error occurs because, after the update, the whole `DEVICE_METADATA` table is deleted rather than just showing as empty, i.e. `"DEVICE_METADATA": {}`.

#### New command output (if the output of a command-line utility has changed)
```
admin@vlab-01:~$ sudo config apply-patch empty-a-table.json-patch -v
Patch Applier: Patch application starting.
Patch Applier: Patch: [{"op": "replace", "path": "/DEVICE_METADATA", "value": {}}]
Patch Applier: Validating patch is not making changes to tables without YANG models.
Patch Applier: Getting current config db.
Patch Applier: Simulating the target full config after applying the patch.
Patch Applier: Validating target config does not have empty tables, since they do not show up in ConfigDb.
Failed to apply patch
Usage: config apply-patch [OPTIONS] PATCH_FILE_PATH
Try "config apply-patch -h" for help.

Error: Given patch is not valid because it will result in empty tables which is not allowed in ConfigDb. Table: DEVICE_METADATA
admin@vlab-01:~$ 
```