Utils/FileLoading: Fix LoadFileImpl
It is not an error for pread to return /less/ than what was requested. In
fact it is very common for the Linux kernel to return less data than
requested when reading from procfs.

procfs keeps coming back to bite this function: previously it was fstat
returning a size of 0, and now it is procfs only feeding as much data as it
wants per read. In particular, /proc/self/maps would only return ~3k bytes
per read on my system, leaving the loaded data incomplete.

To fully fix the issue, always keep reading until pread returns either an
error OR zero!
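
For illustration only (not part of this commit), here is a minimal standalone sketch of the pattern the message describes, assuming a plain POSIX/C++ environment; the ReadAll helper and its names are hypothetical. It accumulates pread results until the call returns 0 (end of file) or -1 (error) instead of trusting a single call to return everything requested.

#include <unistd.h>

#include <vector>

// Hypothetical helper: read exactly ExpectedSize bytes from FD starting at
// offset 0, tolerating short reads. Returns true only if the whole amount
// was read without hitting an error.
static bool ReadAll(int FD, std::vector<char>& Data, size_t ExpectedSize) {
  Data.resize(ExpectedSize);
  ssize_t Offset = 0;
  ssize_t Read = -1;
  // pread may legally return fewer bytes than requested (common on procfs),
  // so keep accumulating until EOF (0) or error (-1).
  while ((Read = pread(FD, Data.data() + Offset, ExpectedSize - Offset, Offset)) > 0) {
    Offset += Read;
  }
  return Read != -1 && static_cast<size_t>(Offset) == ExpectedSize;
}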
Sonicadvance1 committed Dec 14, 2024
1 parent c902b88 commit 65bf1e3
Showing 1 changed file with 9 additions and 5 deletions.
14 changes: 9 additions & 5 deletions FEXCore/Source/Utils/FileLoading.cpp
@@ -31,24 +31,28 @@ static bool LoadFileImpl(T& Data, const fextl::string& Filepath, size_t FixedSize)
     FileSize = FixedSize;
   }
 
+  ssize_t CurrentOffset = 0;
   ssize_t Read = -1;
   bool LoadedFile {};
   if (FileSize) {
     // File size is known upfront
     Data.resize(FileSize);
-    Read = pread(FD, &Data.at(0), FileSize, 0);
+    while ((Read = pread(FD, &Data.at(CurrentOffset), FileSize, 0)) > 0) {
+      CurrentOffset += Read;
+    }
 
-    LoadedFile = Read == FileSize;
+    LoadedFile = CurrentOffset == FileSize && Read != -1;
   } else {
     // The file is either empty or its size is unknown (e.g. procfs data).
     // Try reading in chunks instead
-    ssize_t CurrentOffset = 0;
     constexpr size_t READ_SIZE = 4096;
     Data.resize(READ_SIZE);
 
-    while ((Read = pread(FD, &Data.at(CurrentOffset), READ_SIZE, CurrentOffset)) == READ_SIZE) {
+    while ((Read = pread(FD, &Data.at(CurrentOffset), READ_SIZE, CurrentOffset)) > 0) {
       CurrentOffset += Read;
-      Data.resize(CurrentOffset + Read);
+      if ((CurrentOffset + READ_SIZE) > Data.size()) {
+        Data.resize(CurrentOffset + READ_SIZE);
+      }
     }
 
     if (Read == -1) {
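
As a companion illustration (also not part of the commit), a hedged standalone sketch of the unknown-size strategy from the else branch above: read /proc/self/maps in 4 KiB chunks into a buffer that grows ahead of the next read, stopping only on EOF or error. The main() wrapper and variable names are assumptions for the example; stopping as soon as a read returns less than READ_SIZE, as the old condition did, would truncate procfs output.

#include <fcntl.h>
#include <unistd.h>

#include <cstdio>
#include <vector>

int main() {
  const int FD = open("/proc/self/maps", O_RDONLY);
  if (FD == -1) {
    return 1;
  }

  constexpr size_t READ_SIZE = 4096;
  std::vector<char> Data(READ_SIZE);
  ssize_t Offset = 0;
  ssize_t Read = -1;

  // procfs frequently returns short reads, so only EOF (0) or error (-1) ends the loop.
  while ((Read = pread(FD, Data.data() + Offset, READ_SIZE, Offset)) > 0) {
    Offset += Read;
    // Keep room for the next full chunk, mirroring the resize logic in the patch.
    if (static_cast<size_t>(Offset) + READ_SIZE > Data.size()) {
      Data.resize(Offset + READ_SIZE);
    }
  }
  close(FD);

  if (Read == -1) {
    return 1;
  }

  Data.resize(Offset); // Trim to the bytes actually read.
  std::printf("Read %zd bytes from /proc/self/maps\n", Offset);
  return 0;
}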
