The file system is really just a B-tree. If you're concerned about memory use, you can implement an O(log n) map on top of the file system, where the entries are the different critical sections.
Every node is a folder and every leaf is a file. Many package managers implement maps this way.
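A minimal sketch of the idea, under some assumptions: `FSMap` and its per-character directory sharding are illustrative names, and this builds a trie of directories rather than a true B-tree, but it shows the "folders as nodes, files as leaves" shape:

```python
import os
import tempfile


class FSMap:
    """Toy map backed by the file system: each key becomes a path,
    one directory per character, with the value stored in a leaf file."""

    def __init__(self, root):
        self.root = root

    def _path(self, key):
        # Shard by character so no single directory grows too large,
        # similar to how package managers shard caches by hash prefix.
        return os.path.join(self.root, *key[:-1], key[-1])

    def __setitem__(self, key, value):
        path = self._path(key)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(value)

    def __getitem__(self, key):
        with open(self._path(key)) as f:
            return f.read()


root = tempfile.mkdtemp()
m = FSMap(root)
m["spam"] = "eggs"
print(m["spam"])  # eggs
```

Lookup cost is the key length times the directory-walk cost, and the kernel's dentry cache does the caching for free.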
I'd like to propose a second alternative: a script responsible for creating the file of commands. The file is actually a FIFO pipe. The script streams data into it, but to save resources it stops writing once the file/buffer reaches a certain size. The command-running code lives in a different process.
Python has good support for handling streams.
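A POSIX-only sketch of that arrangement (the names `producer` and `commands.fifo` are mine, and a thread stands in for the separate consumer process to keep it self-contained). Note that the "stop at a certain size" behaviour comes free with a FIFO: writes block once the kernel pipe buffer fills, which is backpressure rather than an explicit check:

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "commands.fifo")
os.mkfifo(fifo)  # POSIX only


def producer():
    # open() blocks until a reader connects; write() blocks once the
    # kernel pipe buffer (typically 64 KiB on Linux) is full, so the
    # producer pauses automatically instead of growing a file on disk.
    with open(fifo, "w") as out:
        for i in range(5):
            out.write(f"command {i}\n")


threading.Thread(target=producer, daemon=True).start()

# The consumer would normally be a different process; reading the FIFO
# like an ordinary file is all it takes.
received = []
with open(fifo) as commands:
    for line in commands:
        received.append(line.strip())
print(received)
```

The consumer sees EOF when the producer closes its end, so no sentinel value is needed.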
Chris Angelico writes:
On Wed, Oct 10, 2018 at 5:09 AM Stephen J. Turnbull wrote:

Sure, but knowing how your system works is far more important. E.g., create a 1TB file on a POSIX system, delete it while a process still has it opened, and it doesn't matter how you process the output of du or ls, you still have 1TB of used file space not accounted for. The same applies to swapfiles. But "df" knows and will tell you.

In fact, "ps" will tell you how much shared memory a process is using. I just don't see a problem here, on the "I'm not getting the data I need" side. You do have access to the data you need.
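The deleted-but-open effect described above is easy to demonstrate on a POSIX system (a small file stands in for the 1TB one): the directory entry disappears, so du and ls no longer count it, but the inode and its blocks survive until the last descriptor closes.

```python
import os
import tempfile

# Create a file, then delete its name while keeping it open.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 4096)
os.unlink(path)                          # name gone; space NOT reclaimed
still_allocated = os.fstat(fd).st_size   # the inode is still alive
print(os.path.exists(path), still_allocated)  # False 4096
os.close(fd)                             # now the kernel frees the blocks
```

Only df, which asks the file system for used blocks rather than walking names, accounts for that space while the descriptor stays open.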
Chris Angelico writes:
Both processes are using the virtual memory. Either or both could be
using physical memory. Assuming they haven't written to the pages
(which is the case with executables - the system mmaps the binary into
your memory space as read-only), and assuming that those pages are
backed by physical memory, which process is using that memory?
One doesn't know. Clever side-channel attacks aside, I don't care,
and I don't see how it could matter.
It matters a lot when you're trying to figure out what your system is doing.
You add up the right things, of course, and avoid paradoxes.

The disk quota enforcement problem is indeed hard. This sounds to me like a special problem studied in cost accounting, a problem which was solved (a computation that satisfies certain axioms was shown to exist and be unique) in a sense by Aumann and Shapley in the 1970s. The A-S prices have been used by telephone carriers to allocate costs of fixed assets with capacity constraints to individual calls, though I don't know if the method is still in use. I'm not sure if the disk quota problem fits the A-S theorem (which imposes certain monotonicity conditions), but the general paradigm does.

However, the quota problem (and in general, the problem of allocation of overhead) is "hard" even if you have complete information about the system, because it's a values problem: what events are bad? What events are worse? What events are unacceptable (result in bankruptcy and abandonment of the system)? Getting very complete, accurate information about the physical consequences of individual events in the system (linking to a file on disk, allocating a large quantity of virtual memory) is not difficult, in the sense that you throw money and engineers at it, and you get "df". Getting very complete, accurate information about the values you're trying to satisfy is possible only for an omniscient god, even if, as in business, they can be measured in currency units.

Steve
Tell me, which process is responsible for libc being in memory?
Other than, like, all of them?
Yes. Why would you want a different answer?
Because that would mean that I have way more *physical* memory in use
than I actually have chips on the motherboard for.
No, that's like saying that because you have multiple links to a file
on disk you're using more physical disk than you have.
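The hard-link analogy can be shown concretely (all names here are illustrative): two directory entries point at one inode, so a naive per-name sum double-counts storage that physically exists once.

```python
import os
import tempfile

# Two hard links to the same file: two names, one inode, one set of blocks.
d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")
with open(a, "w") as f:
    f.write("x" * 4096)
os.link(a, b)  # second name for the same inode

sa, sb = os.stat(a), os.stat(b)
naive_total = sa.st_size + sb.st_size  # 8192: sums each name separately
same_inode = sa.st_ino == sb.st_ino    # True: it is one file on disk
print(naive_total, same_inode, sa.st_nlink)  # 8192 True 2
```

This is exactly why du deduplicates by inode, and why summing per-process memory maps overstates physical RAM in the same way.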
Actually, that's exactly the same problem, with exactly the same
consequences. How do you figure out why your disk is full? How do you
enforce disk quotas? How can you get any sort of reasonable metrics
about anything when the sum of everything vastly exceeds the actual