
Fix process memory provider on Linux (#1933)

### Problem description
The process memory provider currently doesn't function correctly on
Linux due to incorrect handling of the special procfs file
`/proc/<pid>/maps`. I don't know if some of this behavior could vary by
distro and/or kernel version, but I've observed the following issues in
my Ubuntu 24.04 environment.

- The current code in master calls `file.readString()` which attempts to
determine the size of the file by [seeking to the
end](https://github.com/WerWolv/libwolv/blob/master/libs/io/source/io/file_unix.cpp#L148).
However, procfs files don't have a defined size, so this fails with a
return of -1. libwolv [interprets this as the file size and attempts to
allocate an enormous
buffer](https://github.com/WerWolv/libwolv/blob/master/libs/io/source/io/file.cpp#L30),
which results in an exception, so ultimately the process memory provider
is unusable on the current code.
- The previous version of the code that went out in 1.35.4 was calling
`readString` with a fixed maximum size of `0xF'FFFF`. This avoids the
seek issue, but when working with special files, a single `read` call
isn't guaranteed to read the requested number of bytes even if that many
bytes are available. In practice, on my machine, this call only ever
reads the first few dozen lines of the file. So the feature works in
this version, but it's unable to see the vast majority of the process's
address space.
- On a more minor note, on rows in the `maps` file that have a filename,
the filenames are visually aligned by padding spaces between the inode
column and filename column. ImHex includes these spaces as part of the
filename, resulting in most of the path being pushed out of the visible
area of the window.
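
The partial-read behavior described in the second bullet can be reproduced outside of ImHex: a single read may return early on procfs files, so the whole file can only be captured by looping. A minimal sketch using plain C stdio (`readWholeFile` is a hypothetical helper, not ImHex/libwolv code):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Read an entire procfs-style file (which has no defined size) by looping
// until fread() stops returning data, instead of trusting a seek-to-end
// size query or a single oversized read.
std::string readWholeFile(const char *path) {
    std::string data;
    if (std::FILE *f = std::fopen(path, "rb")) {
        char buf[0xFFFF];
        size_t n;
        while ((n = std::fread(buf, 1, sizeof(buf), f)) > 0)
            data.append(buf, n);
        std::fclose(f);
    }
    return data;
}
```

On Linux, `readWholeFile("/proc/self/maps")` returns the complete mapping list even when individual reads come back short.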

### Implementation description

- To ensure the entire `maps` file is read, I've changed the code to
read from the file in a loop until we stop getting data. I've also set a
fixed limit on the maximum number of bytes to read in one go to avoid
issues with trying to determine the file size.
- I've added a `trim` call to remove any padding around the filename.
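
To illustrate the parsing and trimming, here is a sketch of how one `maps` line breaks down: the first five fields (address range, permissions, offset, device, inode) are space-separated, and everything after the inode is the pathname, which procfs pads with spaces for visual alignment. The names below (`MapsEntry`, `parseMapsLine`) are hypothetical and not the ImHex code:

```cpp
#include <cassert>
#include <sstream>
#include <string>

struct MapsEntry {
    unsigned long long start = 0, end = 0;
    std::string perms, name;
};

// Parse one /proc/<pid>/maps line; the pathname keeps its alignment
// padding unless explicitly trimmed.
MapsEntry parseMapsLine(const std::string &line) {
    MapsEntry e;
    std::istringstream in(line);
    std::string range, offset, dev, inode;
    in >> range >> e.perms >> offset >> dev >> inode;
    auto dash = range.find('-');
    e.start = std::stoull(range.substr(0, dash), nullptr, 16);
    e.end   = std::stoull(range.substr(dash + 1), nullptr, 16);
    std::getline(in, e.name);                       // rest of the line, padding included
    e.name.erase(0, e.name.find_first_not_of(' ')); // trim the alignment padding
    return e;
}
```

Without the final trim, the leading padding would be kept as part of the filename, which is exactly the display issue noted above.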

### Screenshots
Exception in `file.readString()` in current code (for some reason this
also causes the window to become transparent):

![mem_regions_exception](https://github.com/user-attachments/assets/ac9f472b-3d60-446d-be9c-b028b041e547)

Abridged memory region list in 1.35.4:

![mem_regions_truncated](https://github.com/user-attachments/assets/44e60b23-49f8-41b9-a56b-54cb5c82ee72)

Complete memory region list after this PR:

![mem_regions_working](https://github.com/user-attachments/assets/bdb42dc6-bcd3-42b1-b605-a233b98e8d2e)

### Additional things
I was focused on fixing this ImHex feature here, but I wonder if some of
this should be addressed in libwolv. Maybe `readBuffer` in file_unix.cpp
should read in a loop until it has the requested number of bytes or
encounters EOF/error?
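
The suggested libwolv change could look roughly like the following: loop on `read(2)` until the requested byte count is reached or EOF/error occurs, since a single `read` on a special file may return fewer bytes than requested even when more are available. `readFully` is a hypothetical helper, not libwolv's actual API:

```cpp
#include <cassert>
#include <cstddef>
#include <fcntl.h>   // open() in the usage example
#include <unistd.h>

// Keep calling read(2) until `size` bytes are gathered or EOF/error.
ssize_t readFully(int fd, void *buffer, size_t size) {
    auto *out = static_cast<char *>(buffer);
    size_t total = 0;
    while (total < size) {
        ssize_t n = ::read(fd, out + total, size - total);
        if (n < 0) return n;   // error: propagate to the caller
        if (n == 0) break;     // EOF: return what we have
        total += static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(total);
}
```

With this, callers like the process memory provider would get the full contents of `/proc/<pid>/maps` in one call instead of whatever the first `read` happened to return.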

---------

Co-authored-by: Justus Garbe <55301990+jumanji144@users.noreply.github.com>
descawed 2024-11-07 07:41:04 -05:00, committed by GitHub
commit 6d14b3f6bd (parent 592f613a61)

```diff
@@ -424,7 +424,16 @@ namespace hex::plugin::builtin {
         if (!file.isValid())
             return;
-        for (const auto &line : wolv::util::splitString(file.readString(), "\n")) {
+        // procfs files don't have a defined size, so we have to just keep reading until we stop getting data
+        std::string data;
+        while (true) {
+            auto chunk = file.readString(0xFFFF);
+            if (chunk.empty())
+                break;
+            data.append(chunk);
+        }
+        for (const auto &line : wolv::util::splitString(data, "\n")) {
             const auto &split = splitString(line, " ");
             if (split.size() < 5)
                 continue;
@@ -434,7 +443,7 @@ namespace hex::plugin::builtin {
             std::string name;
             if (split.size() > 5)
-                name = combineStrings(std::vector(split.begin() + 5, split.end()), " ");
+                name = wolv::util::trim(combineStrings(std::vector(split.begin() + 5, split.end()), " "));
             m_memoryRegions.insert({ { start, end - start }, name });
         }
```