
macvlan + dhcp doesn't forward hostname to dhcp server #435

Closed
kevinclevenger opened this issue Jan 10, 2020 · 9 comments · Fixed by #670

kevinclevenger commented Jan 10, 2020

I set up a macvlan network, and when I start a container [0] it does get an address; however, it appears that the container hostname is not passed to the DHCP server (dnsmasq 2.80-4).

The entry in the dnsmasq leases file is:
1578703576 ce:a3:23:45:7b:b4 192.168.1.78 * 34:39:30:30:66:39:66:31:62:37:31:61:62:32 ( ... wth? 225 characters long for some reason!?)

Normal dhcp client lease entries:
1578670901 cc:46:4e:7b:0d:9c 192.168.1.34 little-bird 01:cc:46:4e:7b:0d:9c
1578703631 52:54:00:c0:81:1a 192.168.1.91 rhel7 *

Is there some other configuration point I need to add? Is this related to #367?

[0] # podman run -it --rm --hostname da1 --name da1 --network local

90-local.conflist
{
    "cniVersion": "0.4.0",
    "name": "local",
    "plugins": [
        {
            "type": "macvlan",
            "master": "enp1s0",
            "ipam": {
                "type": "dhcp"
            }
        }
    ]
}
jellonek (Member) commented

At the moment the current implementation does not send a hostname in its communication with the DHCP server; it sends only the client ID:

opts[dhcp4.OptionClientIdentifier] = []byte(l.clientID)

The client ID is generated by the plugin itself, not taken from the hostname.
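For illustration, here is a minimal, self-contained sketch (not the actual plugin code) of how the lease options could also carry option 12 (host name, RFC 2132) next to the client identifier. The helper name and parameters are made up; the option constants come from the d2g/dhcp4 package referenced in the line above.

package main

import (
    "fmt"

    "github.com/d2g/dhcp4"
)

// buildOptions is a hypothetical helper showing how the plugin could
// attach a host name next to the client identifier it already sends.
func buildOptions(clientID, hostname string) dhcp4.Options {
    opts := make(dhcp4.Options)
    // Current behavior: only the client identifier goes out.
    opts[dhcp4.OptionClientIdentifier] = []byte(clientID)
    if hostname != "" {
        // Proposed addition: forward a name so servers like dnsmasq
        // record something readable in their leases file.
        opts[dhcp4.OptionHostName] = []byte(hostname)
    }
    return opts
}

func main() {
    fmt.Println(buildOptions("example-container-id", "da1"))
}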

kevinclevenger (Author) commented

Are there plans to add OptionHostName at some point (soon I hope)?


addreas commented Mar 10, 2020

Dug around a bit and ended up with this patch as a workaround.

This uses the pod/container name instead of the hostname, because the hostname is not currently passed to the plugin (a sketch of the approach follows the list below). It would be possible to pass it down, but it seems like that would require changes to containers/libpod and cri-o/ocicni as well. I'm not familiar enough with the organizational part of the projects to know how difficult that would be. And then there are other container runtimes to consider as well.

Just for reference these are the places that would require changes, I think. Haven't tried to build a patched podman so I might be missing something.

  • add hostname to the struct here. (cri-o/ocicni)
  • include hostname in the runtime config here. (cri-o/ocicni)
  • include hostname in the struct here. (containers/libpod)
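For reference, the workaround's approach boils down to something like the following sketch, assuming the runtime passes K8S_POD_NAME through the CNI_ARGS environment variable (semicolon-separated KEY=value pairs). The helper name is made up; this is illustrative, not the actual patch.

package main

import (
    "fmt"
    "os"
    "strings"
)

// podNameFromCNIArgs pulls K8S_POD_NAME out of CNI_ARGS. The pod name
// could then be sent in place of a real hostname (DHCP option 12).
func podNameFromCNIArgs() string {
    for _, kv := range strings.Split(os.Getenv("CNI_ARGS"), ";") {
        if strings.HasPrefix(kv, "K8S_POD_NAME=") {
            return strings.TrimPrefix(kv, "K8S_POD_NAME=")
        }
    }
    return ""
}

func main() {
    os.Setenv("CNI_ARGS", "IgnoreUnknown=1;K8S_POD_NAME=da1")
    fmt.Println("name to send as option 12:", podNameFromCNIArgs())
}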


holmesb commented Jun 29, 2020

We've tested that @addreas's "workaround" above is functional. If many orgs are like ours, using the pod name as the hostname is perfectly sufficient to resolve this. It would be great to get this patch merged.

bboreham (Contributor) commented Jul 1, 2020

This project (CNI) is set up to be cross-runtime, so it is unlikely we would add a parameter K8S_POD_NAME.

It does seem like a good idea to have a name parameter which can be used in places like this: it is more human-readable than the container ID, which is the only identifier we have at present. Then, as @addreas says, callers like ocicni would need to supply this parameter. Given that, the dhcp plugin could pass it as OptionHostName.


addreas commented Jul 6, 2020

We should definitely add it under another name; I only used what was already available for my test. Perhaps something as simple as CNI_HOSTNAME?

It would be great to get it merged here, so that we could move on to the runtime parts of the problem. Should I open a PR @bboreham?

bboreham (Contributor) commented Jul 6, 2020

Sure, we can look at PR(s) (I expect there is at least one in containernetworking/cni and one in containernetworking/plugins).

Personally I would have thought the value should be unique, e.g. namespace+podname for Kubernetes, but then it doesn't match the value of $HOSTNAME inside the container. I don't know if that's important.

In the last CNI maintainers meeting @dcbw felt it should be "container name" rather than "host name".
I guess we can repaint the bikeshed later once we see what it looks like.

We were also uncertain whether the new field should go at the top level, which would require a spec change, or use the runtime configuration mechanism, which would not.
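For concreteness, the runtime configuration route would look roughly like this on the plugin side; the "hostname" capability key is hypothetical, used here only to illustrate the mechanism.

package main

import (
    "encoding/json"
    "fmt"
)

// NetConf sketches the plugin-side view of the runtime configuration
// mechanism: the network config declares a capability, and the runtime
// injects the value under "runtimeConfig" with no spec change needed.
type NetConf struct {
    CNIVersion    string `json:"cniVersion"`
    Name          string `json:"name"`
    Type          string `json:"type"`
    RuntimeConfig struct {
        Hostname string `json:"hostname,omitempty"`
    } `json:"runtimeConfig"`
}

func main() {
    // What the plugin might receive on stdin once the runtime has
    // filled in the (hypothetical) hostname capability.
    stdin := []byte(`{
        "cniVersion": "0.4.0",
        "name": "local",
        "type": "macvlan",
        "runtimeConfig": {"hostname": "da1"}
    }`)
    var conf NetConf
    if err := json.Unmarshal(stdin, &conf); err != nil {
        panic(err)
    }
    fmt.Println("hostname from runtimeConfig:", conf.RuntimeConfig.Hostname)
}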

dcbw (Member) commented Jul 8, 2020

@bboreham @addreas yes, I'd rather have a generic CNI_CONTAINERNAME that many plugins can use (for logging or whatever at the very least), which then the DHCP plugin could repurpose as the hostname if it chose to do so. I don't think there is good justification for adding CNI_HOSTNAME because that is way too ambiguous for reasons I'm happy to discuss.

So, I would like to see a proposal for adding CNI_CONTAINERNAME to the spec, with the requirements that it must not be used as any part of a key for storing data about the container (that's what the "primary key" composed of name/id/ifname is for), and that it may only contain characters that pass the CNI_CONTAINERID allowed-characters check.
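As a sketch, such a check might look like the following; the pattern approximates the CNI_CONTAINERID character rules (alphanumerics plus '.', '_', '-') and is an assumption for illustration, not copied from libcni.

package main

import (
    "fmt"
    "regexp"
)

// validName approximates the allowed-character check CNI applies to
// CNI_CONTAINERID; the exact pattern here is illustrative only.
var validName = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9._-]*$`)

func main() {
    for _, name := range []string{"da1", "kube-system_da1", "bad name!"} {
        fmt.Printf("%q allowed: %v\n", name, validName.MatchString(name))
    }
}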


addreas commented Jul 12, 2020

Life is moving quickly at the moment, so I won't be able to spend time on a PR for this any time soon unfortunately.

I definitely agree that something more broadly useful to other plugins is the way to go, so adding CNI_CONTAINERNAME to the spec makes more sense. Assuming the hostname ambiguities stem from different runtime semantics, I'd still argue that the hostname should be added as one of the "well-known capabilities" in the runtime configuration. Since capabilities are optional, it would be up to the runtime to decide whether a hostname makes sense.
