
nasl_pread: Failed to close file descriptor (only on certain systems) #242

Open
LoZio opened this issue Jan 8, 2024 · 63 comments
Labels: bug (Something isn't working)

LoZio commented Jan 8, 2024


Describe the bug
Installed the latest container as of today. The log says it's
Greenbone Vulnerability Manager version 23.1.0 (DB revision 255)
First I upgraded the existing data using the volume I had before; then I created a brand-new volume and container and started from scratch.
I create a target, or use one of the dozens I have and have always used.
I start a scan using the Full and fast config, but it is the same with any of the configurations I have always used.
The only data the scanner gets is the ping result. None of the tests in the scan config is performed.
If I look at the processes that run during the scan, the usual ones do not even start.
The report says 0 ports are available.
If I tcpdump the interface I see the ICMP traffic but nothing on the TCP ports.
It seems nmap does not start at all, so no tests are performed on the 0 open ports.
If I start a bash in the container and manually run nmap against the same IPs I want to scan, it works with no problems (it shows the open ports on the targets, so the network checks out too) and I can sniff the TCP probes.
Nothing in the logs says it failed; for example, it reports:

OSPD[683] 2024-01-08 16:44:56,433: INFO: (ospd.ospd) Starting scan 7e21607b-640a-4751-a6f6-15dfeff8f5d1.
OSPD[683] 2024-01-08 16:46:37,685: INFO: (ospd.ospd) 7e21607b-640a-4751-a6f6-15dfeff8f5d1: Host scan finished.
OSPD[683] 2024-01-08 16:46:37,692: INFO: (ospd.ospd) 7e21607b-640a-4751-a6f6-15dfeff8f5d1: Scan finished.

I changed the alive test to each of the available options; it always does the same thing and stops the scan after the ping phase.

To Reproduce
Steps to reproduce the behavior:
Just start the container with the :latest image (a typical invocation is sketched below).
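A typical invocation would look something like this (a sketch; the flags, port mapping, and volume name are assumptions based on how the image is used elsewhere in this thread, not taken from this report):

docker run -d -p 8080:9392 -v openvas:/data --name openvas immauss/openvas:latest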
Expected behavior
The scan should detect the target's open ports and run the configured tests against them.

Screenshots
[screenshot]

Environment (please complete the following information):

  • OS: Ubuntu 22.04
  • Memory available to OS: 18G
  • Container environment used, with version:

    Client: Docker Engine - Community
      Version: 24.0.7
      API version: 1.43
      Go version: go1.20.10
      Git commit: afdd53b
      Built: Thu Oct 26 09:08:01 2023
      OS/Arch: linux/amd64
      Context: default

    Server: Docker Engine - Community
      Engine:
        Version: 24.0.7
        API version: 1.43 (minimum version 1.12)
        Go version: go1.20.10
        Git commit: 311b9ff
        Built: Thu Oct 26 09:08:01 2023
        OS/Arch: linux/amd64
        Experimental: false
      containerd:
        Version: 1.6.26
        GitCommit: 3dd1e886e55dd695541fdcd67420c2888645a495
      runc:
        Version: 1.1.10
        GitCommit: v1.1.10-0-g18a0cb0
      docker-init:
        Version: 0.19.0
        GitCommit: de40ad0

Logs (commands assume the container name is 'openvas')
Please attach the output from one of the following commands:

docker:

docker logs openvas > logfile.log

Podman:

podman logs openvas > logfile.log

docker-compose:

docker-compose logs > logfile.log

Please attach the file instead of pasting the contents into the issue.

immauss (Owner) commented Jan 8, 2024

Is the container showing as healthy?

-Scott

LoZio (Author) commented Jan 9, 2024

Yes, and it is actually healthy, since everything else works; reports are even being generated. It just seems not to start nmap, so there are no ports to scan.

immauss (Owner) commented Jan 9, 2024

Can you run a scan against this container:
immauss/scannable

use:
User: scannable
password: Passw0rd

Just fire it up, check the logs for its IP, and then create a target for it.

It might even be easier to use the docker-compose.yml that is in the "testing" directory; it will start a scannable container on the same network.
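If it helps, bringing that up would look something like this (a sketch; it assumes you've cloned this repo and that the compose file lives in the "testing" directory):

cd testing && docker-compose up -d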

LoZio (Author) commented Jan 11, 2024

Did a clean setup of everything: new openvas:latest container from scratch, updated.
Started a scannable container (it got IP 172.17.0.3). I can log in via SSH to the scannable container from the openvas container:
[screenshot]
So the openvas container can reach the scannable instance's SSH.

I created a new target for 172.17.0.3 and a new scan task with the Full and fast profile.
The profile has everything enabled:
[screenshot]

Still, the scan task finishes without errors, finding only the ICMP result:
[screenshot]

immauss (Owner) commented Jan 11, 2024

Curious ... what port list are you assigning to the target?

-Scott

LoZio (Author) commented Jan 11, 2024

[screenshot]

I also tested a port list with only SSH; nothing changed. I changed the alive test to ping. Same.
If I sniff on the target (one of mine), I see the ICMP probes arriving but no TCP probes.
If I enter the openvas container and start nmap against the machine where I'm sniffing, I see the nmap SYNs.

immauss (Owner) commented Jan 11, 2024

I'm stumped ....
I have that same version scanning hosts on multiple installs ....

If you roll back to a previous version, 22.4.37 or 22.4.36, does the problem persist?

-Scott

LoZio (Author) commented Jan 11, 2024

I tried 22.4.36 before writing this post: same behavior (and I still have the container if needed for tests). I also tried :latest on a different, physical host: same odd behavior.

immauss (Owner) commented Jan 11, 2024

Was 22.4.36 working for you previously?

.38 was a big change since I moved to a new base image, which was my concern...

-Scott

LoZio (Author) commented Jan 11, 2024

I did my last upgrade at the beginning of December using :latest, as I usually do, and ran a scan with no problem. Then after the holidays I upgraded to the latest .37. I don't know which was the last version that worked. If you give me the tag for a pre-big-changes version, I'll run a new dedicated container later in the afternoon. I'll use the same host to avoid changing too many things at once.

immauss (Owner) commented Jan 11, 2024

Hmmm .38 should have been the most recent "big" change.
But....

.35 was pushed on 22 November with .34 only existing briefly. So maybe go all the way back to .33 ?
Most of the changes in there were minor version updates from GB and some changes with the timeouts for the healthcheck script.

-Scott

LoZio (Author) commented Jan 11, 2024

Launched a .33 container, and it works as it always did!
[screenshot]

.33 is good!
Edit: I'm running a sync just to check that there's nothing related to NVTs or the like, then I'll re-launch the scan.

immauss (Owner) commented Jan 11, 2024

Well ...
That is odd .....
But now I know where to look.
Thanks.

Let me know how it goes after the sync, because you are right, that could have an effect as well.

-Scott

LoZio (Author) commented Jan 11, 2024

Second scan after feed sync was good, ports found open.

immauss (Owner) commented Jan 11, 2024

OK ...
Let me take a look at what else changed there, and I'll see if I can come up with something.
Might take me a day or two though.
Sorry ...

-Scott

LoZio (Author) commented Jan 11, 2024

Take your time; you're doing a great job, no need to hurry.

Vict0rC commented Jan 12, 2024

Hello immauss, just reporting: same problem after upgrading from a working 22.4.28 to 22.4.38. No ports at all ...
Thanks for great work!

immauss (Owner) commented Jan 14, 2024

@Vict0rC and/or @LoZio
Could you please run the following.

docker exec -it openvas sed -i "s/level=.*$/level=128/" /etc/gvm/openvas_log.conf

(assumes the container name is openvas)
Then restart the container.

This will enable debug logging from openvas.

Once it is done restarting, please run a scan, check the logs, and attach them here.

Thanks,
-Scott

LoZio (Author) commented Jan 15, 2024

I suppose you wanted to modify
/etc/openvas/openvas_log.conf
instead. I updated it, setting the log level to 128 in all sections.
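For reference, the same one-liner pointed at the corrected path (still assuming the container name is 'openvas') would be:

docker exec -it openvas sed -i "s/level=.*$/level=128/" /etc/openvas/openvas_log.conf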
I created a new scan task directed at the scannable docker instance. As before: ICMP OK and 0 ports.
This is the full gvmd.log from task creation to completion:

event task:MESSAGE:2024-01-15 09h03.14 CET:901: Status of task  (00f93074-6951-4c0d-94b5-9f24eb177b95) has changed to New
event task:MESSAGE:2024-01-15 09h03.14 CET:901: Task 172.17.0.3 (00f93074-6951-4c0d-94b5-9f24eb177b95) has been created by admin
event task:MESSAGE:2024-01-15 09h03.18 CET:935: Status of task 172.17.0.3 (00f93074-6951-4c0d-94b5-9f24eb177b95) has changed to Requested
event task:MESSAGE:2024-01-15 09h03.18 CET:935: Task 172.17.0.3 (00f93074-6951-4c0d-94b5-9f24eb177b95) has been requested to start by admin
event task:MESSAGE:2024-01-15 09h03.28 CET:938: Status of task 172.17.0.3 (00f93074-6951-4c0d-94b5-9f24eb177b95) has changed to Queued
event task:MESSAGE:2024-01-15 09h03.38 CET:938: Status of task 172.17.0.3 (00f93074-6951-4c0d-94b5-9f24eb177b95) has changed to Running
md   main:MESSAGE:2024-01-15 08h03.57 utc:1138:    Greenbone Vulnerability Manager version 23.1.0 (DB revision 255)
md manage:   INFO:2024-01-15 08h03.57 utc:1138:    Getting scanners.
md   main:MESSAGE:2024-01-15 08h04.04 utc:1156:    Greenbone Vulnerability Manager version 23.1.0 (DB revision 255)
md manage:   INFO:2024-01-15 08h04.04 utc:1156:    Verifying scanner.
event task:MESSAGE:2024-01-15 08h06.50 UTC:938: Status of task 172.17.0.3 (00f93074-6951-4c0d-94b5-9f24eb177b95) has changed to Processing
event task:MESSAGE:2024-01-15 08h06.54 UTC:938: Status of task 172.17.0.3 (00f93074-6951-4c0d-94b5-9f24eb177b95) has changed to Done
md   main:MESSAGE:2024-01-15 08h09.11 utc:1663:    Greenbone Vulnerability Manager version 23.1.0 (DB revision 255)
md manage:   INFO:2024-01-15 08h09.11 utc:1663:    Getting scanners.
md   main:MESSAGE:2024-01-15 08h09.17 utc:1671:    Greenbone Vulnerability Manager version 23.1.0 (DB revision 255)
md manage:   INFO:2024-01-15 08h09.17 utc:1671:    Verifying scanner.

This is ospd-openvas.log:

OSPD[463] 2024-01-15 07:57:46,901: INFO: (ospd_openvas.daemon) VTs were up to date. Feed version is 202401080618.
OSPD[463] 2024-01-15 08:03:28,499: INFO: (ospd.command.command) Scan 44761cf9-007c-4c3e-98e5-5a46df30264b added to the queue in position 2.
OSPD[463] 2024-01-15 08:03:34,446: INFO: (ospd.ospd) Currently 1 queued scans.
OSPD[463] 2024-01-15 08:03:34,749: INFO: (ospd.ospd) Starting scan 44761cf9-007c-4c3e-98e5-5a46df30264b.
OSPD[463] 2024-01-15 08:06:47,143: INFO: (ospd.ospd) 44761cf9-007c-4c3e-98e5-5a46df30264b: Host scan finished.
OSPD[463] 2024-01-15 08:06:47,149: INFO: (ospd.ospd) 44761cf9-007c-4c3e-98e5-5a46df30264b: Scan finished.

Attached is the openvas.log, which was the beefier one:
openvas.log

immauss (Owner) commented Jan 15, 2024

I'm still drawing a blank ... but GB released some minor updates, so I've pushed 22.4.39.
WFFM (works fine for me) ... which means little, but ...

Please let me know if it does anything different.

Thanks,
-Scott

immauss (Owner) commented Jan 16, 2024

OK ...
This is a long shot ... but ...

please use this docker-compose.yml

version: "3"
services:
  openvas:
    ports:
      - "8080:9392"
    environment:
      - "PASSWORD=admin"
      - "USERNAME=admin"
      - "RELAYHOST=172.17.0.1"
      - "SMTPPORT=25"
      - "REDISDBS=512" # number of Redis DBs to use
      - "QUIET=false"  # dump feed sync noise to /dev/null
      - "NEWDB=false"  # only use this for creating a blank DB 
      - "SKIPSYNC=true" # Skips the feed sync on startup.
      - "RESTORE=false"  # This probably not be used from compose... see docs.
      - "DEBUG=false"  # This will cause the container to stop and not actually start gvmd
      - "HTTPS=false"  # wether to use HTTPS or not
    volumes:
      - "openvas-test:/data"
    cap_add:
      - NET_ADMIN # for capturing packets in promiscuous mode
      - NET_RAW # for raw sockets e.g. used for the boreas alive detection
    container_name: openvas
    image: immauss/openvas:beta
  scannable:
    image: immauss/scannable
    container_name: scannable
volumes:
  openvas-test:

Log in and create a target for the scannable image with:
user: scannable
password: Passw0rd

Create and run a scan with default options for the scannable target.

This image and the compose file add the network capabilities to the container. I've never found a need for them before ... but maybe this is it.

-Scott

LoZio (Author) commented Jan 18, 2024

@immauss I'll try as soon as I have time, but I think this is not the problem. If I run a shell in the openvas container that does not scan and manually start nmap against the external network, it works fine. So from the network side, the :latest image is OK, methinks.

immauss (Owner) commented Jan 18, 2024

@LoZio you are probably right. There is something that changed after 22.4.33 ... read through #241.
It was a hunch to ask him to try that; I didn't really expect it to work, but it did. I suspect it is the same issue, I just have no idea what it is yet. As I said there, it might take a bit, as I'll need to dedicate some time to figuring out what changed. My initial look through GitHub didn't show me anything serious changing in that time frame.
The more difficult part is not being able to reproduce it on my end ....

Is there anything unusual or unique about your setup that might make things different? Disk type? Virtual machine? Different docker drivers for something? (I know ... I'm really reaching here ...)

Thanks,
-Scott

LoZio (Author) commented Jan 18, 2024

Nothing special. The problem first surfaced on an Ubuntu VM running in VMware; then I had the problem on a physical Dell server. Both had the previous versions running with no problems.
I don't know if you can somehow trace the start of the nmap instance, because I think that for some reason it is simply not running, or it is invoked with broken parameters and returns immediately with no results.
As I said, I can use the container to run nmap manually with no problem, so if something starts it, it will run. Maybe it is started as a non-privileged user and the nmap binary needs the net_raw capability added (setcap on the nmap binary); see the sketch below.
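For illustration, granting raw-socket capabilities to the binary would look roughly like this (a sketch; the path and exact capability set are assumptions, and this may well not be the actual cause):

setcap cap_net_raw,cap_net_admin+eip /usr/bin/nmap
getcap /usr/bin/nmap    # verify: prints the capabilities now attached to the binary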

immauss (Owner) commented Jan 18, 2024

nmap doesn't always run.
Openvas can do the port scans via NASL scripts as well, and I think that is the default. I honestly have not gone that deep in a long time, so I'm not sure.

immauss (Owner) commented Jan 18, 2024

I'm pretty sure you answered this already, but just to make sure:
You get the same results on a brand new clean database right?

-Scott

LoZio (Author) commented Jan 18, 2024

Yes, with both a clean database and an upgraded one.

LoZio (Author) commented Jan 18, 2024

I just had time to test with your compose file above. Started it, with no update to the DB (it's about 3 days old).
Started a scan against the scannable target, 172.19.0.2.
It runs the NASL scripts, at least some of them:
[screenshot]
But again it doesn't find the open ports. I confirm I can SSH to the scannable container, so it is up and running fine:
[screenshot]

There's only one error in the log; I don't know how serious it is:


openvas-test  | ==> /usr/local/var/log/gvm/openvas.log <==
openvas-test  | sd   main:MESSAGE:2024-01-18 13h20.13 utc:1434:f1fc9645-3cc6-466f-947b-bded74e05e84: openvas 22.7.9 started
openvas-test  | sd   main:MESSAGE:2024-01-18 13h20.19 utc:1434:f1fc9645-3cc6-466f-947b-bded74e05e84: Vulnerability scan f1fc9645-3cc6-466f-947b-bded74e05e84 started: Target has 1 hosts: 172.19.0.2, with max_hosts = 20 and max_checks = 4
openvas-test  | libgvm boreas:MESSAGE:2024-01-18 13h20.19 utc:1434:f1fc9645-3cc6-466f-947b-bded74e05e84: Alive scan f1fc9645-3cc6-466f-947b-bded74e05e84 started: Target has 1 hosts
openvas-test  | sd   main:MESSAGE:2024-01-18 13h20.20 utc:1459:f1fc9645-3cc6-466f-947b-bded74e05e84: Vulnerability scan f1fc9645-3cc6-466f-947b-bded74e05e84 started for host: 172.19.0.2 (Vhosts: scannable.ovastest_default)
openvas-test  | :WARNING:2024-01-18 13h20.20 utc:1460:f1fc9645-3cc6-466f-947b-bded74e05e84: nasl_pread: Failed to close file descriptor for child process (Operation not permitted)
openvas-test  | libgvm boreas:MESSAGE:2024-01-18 13h20.22 utc:1434:f1fc9645-3cc6-466f-947b-bded74e05e84: Alive scan f1fc9645-3cc6-466f-947b-bded74e05e84 finished in 3 seconds: 1 alive hosts of 1.
openvas-test  | sd   main:MESSAGE:2024-01-18 13h22.48 utc:1459:f1fc9645-3cc6-466f-947b-bded74e05e84: Vulnerability scan f1fc9645-3cc6-466f-947b-bded74e05e84 finished for host 172.19.0.2 in 148.01 seconds
openvas-test  | sd   main:MESSAGE:2024-01-18 13h22.48 utc:1434:f1fc9645-3cc6-466f-947b-bded74e05e84: Vulnerability scan f1fc9645-3cc6-466f-947b-bded74e05e84 finished in 155 seconds: 1 alive hosts of 1
openvas-test  |
openvas-test  | ==> /usr/local/var/log/gvm/ospd-openvas.log <==
openvas-test  | OSPD[679] 2024-01-18 13:22:50,490: INFO: (ospd.ospd) f1fc9645-3cc6-466f-947b-bded74e05e84: Host scan finished.
openvas-test  | OSPD[679] 2024-01-18 13:22:50,493: INFO: (ospd.ospd) f1fc9645-3cc6-466f-947b-bded74e05e84: Scan finished.

immauss (Owner) commented Jan 18, 2024

yup ....

nasl_pread: Failed to close file descriptor 

Same as #241

OK ... at least now I know I'm only fighting one issue...

Curious ...
On your Ubuntu install ...

Do you run docker as a regular user, or as root?

-Scott

LoZio (Author) commented Jan 29, 2024

[screenshot]

SELinux is disabled. Just to be clear about what I wrote above: on the same docker installation, the older container runs fine.
The other server is a Debian 11; I have no access to it right now. I don't think it has SELinux enforcing by default.
The setup of those hosts is trivial; they are dedicated to this task. I followed every installation step with no changes and installed docker as described at https://docs.docker.com/engine/install/debian/. Only the IP is set during server setup.

immauss (Owner) commented Jan 29, 2024

Sorry ... Ubuntu uses AppArmor, not SELinux.
Any chance AppArmor is throwing an error?

 nasl_pread: Failed to close file descriptor for child process (Operation not permitted)

Anytime there's an "Operation not permitted", it generally comes down to permissions, or to SELinux/AppArmor.

-Scott

LoZio (Author) commented Jan 29, 2024

Just removed/purged AppArmor altogether; no traces of AppArmor remain.
Same behavior.

immauss (Owner) commented Feb 1, 2024

@LoZio I'm still stumped on this one. By chance, is there any way you could give me access to an environment where you are having this problem? Not being able to reproduce the issue on my end makes it incredibly difficult to troubleshoot. If not, no worries. I'll keep trying to think of something ...

If yes, please contact me via my company site:
Immauss Cybersecurity

-Scott

thielsn commented Feb 2, 2024

Hi,

I've also experienced the issue of no ports being found when using the latest docker image, while it was still working at the end of December.

Under "Configuration" → "Scan Configs", I noticed that for "Full and fast" the port scanner was deactivated. After activating it, I also got the same error in the logs as in #241.

After searching for the error message ("Failed to close file descriptor for child process (Operation not permitted)"), I eventually found this suggestion: https://gist.github.com/nathabonfim59/b088db8752673e1e7acace8806390242

Starting the container with --security-opt seccomp=unconfined fixed the issue for me.
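For example, as a plain docker run (a sketch; the image tag, port mapping, and volume are assumptions carried over from earlier in this thread):

docker run -d --security-opt seccomp=unconfined -p 8080:9392 -v openvas:/data --name openvas immauss/openvas:latest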

This is probably not safe (secure) for production, but the linked resources might give a clue as to the actual reason for this error.

LoZio (Author) commented Feb 2, 2024

> @LoZio I'm still stumped on this one. By chance, is there any way you could give me access to an environment where you are having this problem? Not being able to reproduce the issue on my end makes it incredibly difficult to troubleshoot. If not, no worries. I'll keep trying to think of something ...
>
> If yes, please contact me via my company site: Immauss Cybersecurity
>
> -Scott

I used the form the other day; did you receive the message? No hurry, just checking.

LoZio (Author) commented Feb 2, 2024

@thielsn you nailed it!
I added this
[screenshot]
to the compose file (which is the same as adding --security-opt seccomp=unconfined on the docker CLI),
re-ran the existing scan, and got results.
[screenshot]
I'm not skilled enough to dig for a solution, but this definitely seems to be the right path.

immauss (Owner) commented Feb 2, 2024

As this option basically allows the container to do ANYTHING (unconfined), as opposed to the default of limiting the container's capabilities, I do not recommend implementing it unless absolutely necessary. I'm going to research how to resolve this properly, but it does confirm my suspicion of a permissions issue. Now to understand why it happens on some systems and not others.

@thielsn Thank you !!

immauss (Owner) commented Feb 4, 2024

openvas.json
OK ... let's give this a try. Put the attached file (openvas.json) in the same folder as the docker-compose.yml,
then change the security_opt section in the docker-compose.yml to:

services:
  openvas:
    security_opt:
      - seccomp:openvas.json
    ports:
      - "8080:9392"

Then ...

docker-compose up -d

You might need to remove the container first ... but just up -d "should" work (see the sketch below).
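If the old container does linger, removing it first would look like this (a sketch; the container name comes from the compose file above):

docker rm -f openvas
docker-compose up -d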

This was generated using:
https://github.com/containers/oci-seccomp-bpf-hook

starting with the "default.json" as the input. It added a lot of what I believe are redundant bits, and only one new syscall: pivot_root. Which ... I don't know why it would need. If this works, I'll likely be even more confused. But it's a start.

Please let me know if this solves it for you.

Thanks,
-Scott

LoZio (Author) commented Feb 5, 2024

Not working; I created a new container.
[screenshot]

I noticed a line in the logs; I don't know if it was there before:

openvas-lozio3  | chown: invalid user: ‘gvm:gvm’  <====== this one ============
openvas-lozio3  | cp: cannot stat '/var/lib/gvm/*': No such file or directory
openvas-lozio3  | cp: cannot stat '/var/lib/notus/*': No such file or directory
openvas-lozio3  | cp: cannot stat '/var/lib/openvas/*': No such file or directory
openvas-lozio3  | cp: cannot stat '/etc/gvm/*': No such file or directory
openvas-lozio3  | cp: cannot stat '/usr/local/etc/openvas/*': No such file or directory
openvas-lozio3  | Choosing container start method from:

Maybe during the setup some directory does not get the correct permissions after that.

immauss (Owner) commented Feb 5, 2024

That should have been there before. It will only show the first time you start the container ...

OK ...

Thanks,
Scott

immauss (Owner) commented Feb 6, 2024

ok ... Let's try this one.

Never mind ... that iteration works with podman ... but not docker for some reason ...

Stay tuned.

Thanks,
Scott

immauss (Owner) commented Feb 6, 2024

OK ... this one was generated with (hopefully) the correct permissions on the original container, and with the default profile as the input.

openvas.json

-Scott

LoZio (Author) commented Feb 8, 2024

New container, new json. Still no ports.
[screenshot]

immauss (Owner) commented Feb 9, 2024

@thielsn
Can you describe for me, in as much detail as possible, the environment where you are seeing this issue? Can you also provide the output of:

docker info

Thanks,
Scott

thielsn commented Feb 9, 2024

Hi Scott,

happy to help. Please find attached:
host_info.txt

I didn't have much time for further investigation, though. But it could have something to do with glibc, per the source linked in my previous post: https://gist.github.com/nathabonfim59/b088db8752673e1e7acace8806390242

Many thanks for the good work!

Regards
Simon

immauss (Owner) commented Feb 9, 2024

@thielsn Good point.... can you please send the output from:

ldd --version

@LoZio ^^ please.

Thanks,
Scott

immauss (Owner) commented Feb 9, 2024

So ...

After trying a few tools/options/BS to build a profile ... none of which worked ... I took the "default" profile and made one change: I switched the default action from ALLOW to LOG. (This was actually the first thing I tried, but it didn't work on the machine I was testing from ...) What this should do: all of the rules that are specified pass with no issue, but anything not covered by a rule gets logged to the audit log. Only one syscall showed up in the logs:

pread

Maybe this sounds familiar from the "nasl_pread" error messages?

A quick check of the default profile reveals that, indeed, the "pread" syscall is not there. There are entries for pread64, preadv, and preadv2, but not for plain "pread". So ... I've added it to this profile. Now ... that "shouldn't" make a difference: on a 64-bit system it should be using pread64, or the system should be automagically converting pread to pread64.

Anyway ... let's give this one a try and see if it makes the difference. If it does, there is still the question of why this causes a failure on some systems and not others. I suspect it has something to do with how those kernels/libc handle pread.

default.json

Either rename this to openvas.json, or change the seccomp entry in the docker-compose.yml to "default.json".
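For anyone following along, the by-hand edit can be scripted with jq (a sketch; it assumes the allow-list is the first entry in the profile's syscalls array, as it is in Docker's stock default.json):

jq '.syscalls[0].names += ["pread"]' default.json > openvas.json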

LoZio (Author) commented Feb 12, 2024

Replaced the file as openvas.json; still no ports.
[screenshot]

immauss (Owner) commented Feb 12, 2024

Thanks. I was able to replicate the issue this weekend, on Ubuntu 20.04.6, and I've spent (entirely too much) time trying to isolate it.

Findings:

syscall 17 (aka pread) gets logged when the seccomp policy's default is set to log. This is in direct conflict with how the seccomp profile "should" work, as the pread syscall is part of a seccomp rule that explicitly allows it, both in the default profile and in the custom profiles I created during testing. Other syscalls are also logged: 18 (pwrite) and 436 (close_range). On the older Ubuntu 20.04.6 system, the close_range syscall is unknown.
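For reference, one way to surface those logged syscalls on the host (assuming auditd is running; syscall numbers as reported above):

ausearch -m SECCOMP -ts recent
# or grep the raw audit log for the specific syscall numbers:
grep -E 'SECCOMP.*syscall=(17|18|436)' /var/log/audit/audit.log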

I have tried several approaches so far. In short, there does not seem to be a good way to resolve this with seccomp profiles, and the more I read about seccomp profiles, the less I'm inclined to believe they are the correct way to go.

For the foreseeable future, my recommendation will be to use the default if possible, but if the nasl_pread error is present, to use seccomp=unconfined. I'm not happy with that ... but at the moment it seems to be the only option, and I've honestly lost entirely too much time on this. I will also say that this seems to be limited to Debian-based systems, as I'm not seeing it on RedHat-based systems at all, which leads me to believe it might even be kernel-related. :/

I'm going to leave this open though until I find a better solution.

immauss added the "bug" label, and added and then removed the "wontfix" label, on Feb 12, 2024
LoZio (Author) commented Feb 12, 2024

Thank you @immauss for your efforts and @thielsn for finding this.

immauss (Owner) commented Feb 12, 2024

On a whim ... I also just tried several other Ubuntu kernels, everything from the oldest I could find (5.4.0-89) to the newest (5.15.0-94) and a few in between. No change. So ... probably not the kernel either ....

@LoZio Thanks for the reminder.
Big thanks to @thielsn for getting me in the right direction and finding a workaround.

And @LoZio Thanks for the patience and help.

-Scott

immauss changed the title from "No port/services scanning after update" to "nasl_pread: Failed to close file descriptor (only on certain )" on May 9, 2024
immauss changed the title to "nasl_pread: Failed to close file descriptor (only on certain systems)" on May 9, 2024