Avoid redistributing DC l3leaf MLAG iBGP networks in VRFs to BGP #1415
Comments
@carlbuchmann @ClausHolbechArista any progress on this one for the eos_design role?
Border leaf setups are very specific to the use case and network design. We need to be very hesitant about building too much logic into this area, but I agree that we could add a general knob to the VRF definition to avoid the MLAG peer-link being redistributed into BGP.
This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 15 days.
@c-po @kmueller68 Thank you for your patience on this issue; we haven't had the time to tackle this yet, as we are focused on schema implementation and refactoring.
Hi @carlbuchmann, IMHO there is no data-model knob needed, as this should always be set by the underlying eos_designs roles; I see little benefit in making the p2p underlay routes available in the overlay routing table. So this configuration can simply be "just rendered" when generating an EOS L3LS EVPN fabric.
A possible, but more generic, workaround to stop advertisement of the peer link on all leafs in the fabric is to specify `mlag_peer_l3_ipv4_pool` at the l3_leaf level and set it to a generic 169.254.0.0/24 across the board. Then create a route-map to match this, as per the suggestion above:
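A sketch of such a route-map (the prefix-list and route-map names here are illustrative assumptions, not fixed by the workaround):

```
ip prefix-list PL-MLAG-IBGP
   seq 10 permit 169.254.0.0/24 le 32
!
route-map RM-CONN-TO-BGP deny 10
   match ip address prefix-list PL-MLAG-IBGP
!
route-map RM-CONN-TO-BGP permit 20
```

The `le 32` makes the single prefix-list entry match every more-specific subnet carved from the 169.254.0.0/24 pool, so one entry covers all leaf pairs.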
Then add the following per VRF:
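Per VRF, this becomes something like the following (the ASN, VRF name, and route-map name are illustrative assumptions):

```
router bgp 65001
   vrf TENANT_A
      redistribute connected route-map RM-CONN-TO-BGP
```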
This results in 169.254.0.0/24 being used on the intra-VRF iBGP links, while being excluded from advertisement into the network.
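In group_vars form, the workaround boils down to a pool override like this (the placement under l3leaf defaults is an assumption; the exact structure depends on your inventory layout):

```yaml
l3leaf:
  defaults:
    mlag_peer_l3_ipv4_pool: 169.254.0.0/24
```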
From the leaf perspective: Before (showing two other leaf pairs' MLAG peer networks):
After:
This issue is stale because it has been open 90 days with no activity. The issue will be reviewed by a maintainer and may be closed.
Enhancement summary
Each leaf MLAG pair in an AVD-based EVPN VXLAN fabric gets an iBGP transfer VLAN and network configured for each VRF.
The value for this is entered globally, per fabric.yml:

```yaml
mlag_peer_l3_ipv4_pool: 172.29.1.0/24
```

This IP subnet is chosen to be a fabric-local one only.
Furthermore, each VRF under `router bgp` will have `redistribute connected` set by role default. As soon as one of the leaf pairs is used as border leafs, connecting to outside routers via BGP within certain VRF(s), the connected BGP routers will have the (fabric-local) MLAG iBGP peering subnet announced to them, since it is a locally connected one. As a result, the whole network outside the fabric learns these MLAG iBGP subnets, which are meant to stay local/private, from the border leafs.
It would be good to have the eos_designs role automatically add configuration that avoids redistributing fabric-internal networks to the outside.
Which component of AVD is impacted
eos_designs
Use case example
Each time you connect an external BGP router to a non-default VRF at any fabric L3 leaf.
Describe the solution you would like
```
ip prefix-list PL-TENANT_DC_TO_OUTSIDE-MLAG-IBGP
   seq 10 permit 172.29.1.0/31
!
route-map RM-Tenant_DC_TO_OUTSIDE_PREFIX-OUT deny 10
   match ip address prefix-list PL-TENANT_DC_TO_OUTSIDE-MLAG-IBGP
!
route-map RM-Tenant_DC_TO_OUTSIDE_PREFIX-OUT permit 20
!
router bgp
   vrf A
      redistribute connected route-map RM-Tenant_DC_TO_OUTSIDE_PREFIX-OUT
   vrf B
      redistribute connected route-map RM-Tenant_DC_TO_OUTSIDE_PREFIX-OUT
   ....
```
Describe alternatives you have considered
Apply the above solution manually in your fabric files and hope not to forget it.
Additional context
No response