I have a Lua script on Linux that in some cases allocates around 1.3GB of memory. It also calls external command-line tools, and I noticed that it uses A LOT of kernel CPU time when doing so. The problem only appears while the script is holding a lot of memory.
I ran "sudo perf record -g -p PID" on the program, then "sudo perf report --stdio", and noticed that a very large share of the time goes to "copy_page_range", reached via "_do_fork". My guess is that os.execute forks the process and tries to clone the memory pages of the script process for the child.
Is there a way to avoid this cost? I don't even understand why the parent's memory space needs to be given to the child process, though I could be wrong about this being what happens. (I was also under the impression that fork() on Linux does not copy the actual memory, but instead maps it into the child's address space and only copies a page once the child starts modifying it, i.e. copy-on-write.)
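To make that guess testable, here's a hedged micro-benchmark sketch (assuming Linux and a shell that can run "true"; the sizes and iteration count are arbitrary choices, not from the original post). It times 100 os.execute("true") calls while the interpreter holds progressively larger strings; under pure copy-on-write the timings would stay flat, so growth suggests fork() is duplicating per-page bookkeeping (the page tables) even though the data pages themselves are shared:

```lua
-- Sketch (assumptions: Linux, "true" reachable via the shell).
-- Time 100 os.execute("true") calls at increasing heap sizes.
local function bench()
  local s = os.time()
  for i = 1, 100 do os.execute("true") end
  return os.time() - s
end

local keep = {}  -- anchor the strings so the GC doesn't collect them
for _, mb in ipairs({0, 100, 500}) do
  if mb > 0 then keep[#keep + 1] = string.rep("a", 1024^2 * mb) end
  print(("%4dMB held: %ds"):format(mb, bench()))
end
```

os.time only has one-second resolution, which is why the post's examples loop 100 times; os.clock would give finer-grained numbers but measures CPU time rather than wall time.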
Example code to demonstrate the issue:
> local s=os.time() for i=1,100 do os.execute("true") end print(os.time()-s)
0
> a=string.rep("a",1024^2*500) local s=os.time() for i=1,100 do os.execute("true") end print(os.time()-s)
4
> local s=os.time() for i=1,100 do os.execute("true") end print(os.time()-s)
4
Calling os.execute("true") (which does nothing) 100 times takes less than a second. After first allocating 500MB of memory, the same 100 calls consistently take 4 seconds.
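One possible workaround (a sketch of my own, not something confirmed for this script): start a long-lived shell coprocess before the big allocation and feed commands to it over a pipe, so later commands are launched from the small helper process instead of being fork()ed from the 1.3GB one. This assumes stock io.popen and a POSIX sh, and uses a smaller 100MB allocation purely for illustration:

```lua
-- Coprocess workaround sketch (assumptions: POSIX sh, stock io.popen).
-- Open the helper *before* allocating the big buffer; later commands
-- are written to its stdin, so no fork() happens from the big process.
local sh = io.popen("sh -s", "w")

local a = string.rep("a", 1024^2 * 100)  -- big allocation comes afterwards

local s = os.time()
for i = 1, 100 do sh:write("true\n") end
sh:flush()
print(os.time() - s)  -- stays near 0 regardless of heap size

sh:close()            -- sends EOF and waits for the helper to exit
```

The obvious limitation is that you don't get each command's exit status back this way; for that you would need a reply pipe, or a binding such as luaposix that can spawn processes without going through the shell.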
EDIT: Tested with Lua 5.2.4 and LuaJIT on Linux Mint, if it matters.
EDIT2: Seems related: https://stackoverflow.com/a/28040596
Post Details
- Posted 4 years ago