unhrpetby@sh.itjust.works to linuxmemes@lemmy.world • System requirements for me and not for thee • English • 1 · 10 days ago

Yes. Memory that is allocated but never written to still counts toward your limit, unlike in overcommit modes 0 or 1.

The default is to hope that not enough applications on the system cash in on their allocated memory to force the system into OOM. You get more efficient use of memory, but I don’t like this approach.
And as a bonus, if you use overcommit mode 2, you get access to vm.admin_reserve_kbytes, which allows you to reserve memory only for admin users. Quite nice.
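A minimal sketch of how these knobs could be set in a sysctl configuration file; the specific values here are illustrative assumptions, not recommendations:

```
# /etc/sysctl.d/90-overcommit.conf (hypothetical filename)

# Mode 2: refuse allocations past the commit limit instead of overcommitting
vm.overcommit_memory = 2

# Commit limit = swap + 80% of RAM (the kernel default ratio is 50)
vm.overcommit_ratio = 80

# Keep 128 MiB allocatable only by processes with cap_sys_admin
vm.admin_reserve_kbytes = 131072
```

Settings can be applied without a reboot via sysctl --system.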
unhrpetby@sh.itjust.works to linuxmemes@lemmy.world • System requirements for me and not for thee • English • 0 · 10 days ago

Unless you have the vm.overcommit_memory sysctl set to 2, and your commit limit set to less than your system memory. Then, when an application requests more memory than is available, it just gets an error instead of being killed by the OOM killer when it later tries to use that memory.
The XZ backdoor was made possible largely because there was unaudited binary data: one part as test data in the repo, the other part within the pre-built releases. Bootstrapping everything from source would have required that these binaries have an auditable source, allowing public eyes to review the code and likely stopping the attack. Granted, reproducible builds almost certainly would have caught it too, unless the malware wasn’t directly present in the code.
Pulled from here:
Sure, you might have the code that was fed into GCC to create the binary; sure, that code can be absolutely safe, and you can even compile it yourself to see that you arrive at the same bit-for-bit binary as the official release. But was GCC safe? Did some other compilation dependency infect the compiled binary? Bootstrapping from an auditable seed can answer these questions.