I seem to be having an issue with Python when I run a script that creates a large number of subprocesses. The subprocess creation code looks similar to:
Code:
def execute(cmd, stdout=None, stderr=subprocess.STDOUT, cwd=None):
    proc = subprocess.Popen(cmd, shell=True, stdout=stdout, stderr=stderr, cwd=cwd)
    atexit.register(lambda: __kill_proc(proc))
    return proc
The error message I am receiving is:
OSError: [Errno 24] Too many open files
Once this error occurs, I am unable to create any further subprocesses until I kill the script and start it again. I am wondering if the following line could be responsible.
atexit.register(lambda: __kill_proc(proc))
Could it be that this line creates a reference to the subprocess, resulting in a "file" remaining open until the script exits?
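One way to check whether descriptors really are piling up is to count the entries in /proc/self/fd as the script runs. This diagnostic sketch is not from the original post and is Linux-specific:

```python
import os
import resource

def open_fd_count():
    # Linux-specific: /proc/self/fd holds one entry per open descriptor
    # of the current process.
    return len(os.listdir("/proc/self/fd"))

def fd_limit():
    # Soft limit on open descriptors; exceeding it raises
    # "OSError: [Errno 24] Too many open files".
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    return soft

print(open_fd_count(), "of", fd_limit(), "descriptors in use")
```

Calling open_fd_count() before and after a batch of Popen calls shows whether each spawn leaks descriptors.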
So it seems that the line:
atexit.register(lambda: __kill_proc(proc))
was indeed the culprit. This is probably because the lambda kept a reference to the Popen instance around, so the process resources were never freed. When I removed that line the error went away. I have now changed the code as @Bakuriu suggested and am using the process's pid value rather than the Popen instance.
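The pid-based fix can be sketched as follows. Here `_kill_pid` is a hypothetical stand-in for the original `__kill_proc` helper, whose body is not shown in the post:

```python
import atexit
import os
import signal
import subprocess

def _kill_pid(pid):
    # Terminate by pid; ignore processes that have already exited.
    try:
        os.kill(pid, signal.SIGTERM)
    except OSError:
        pass

def execute(cmd, stdout=None, stderr=subprocess.STDOUT, cwd=None):
    proc = subprocess.Popen(cmd, shell=True, stdout=stdout, stderr=stderr, cwd=cwd)
    # Register only the integer pid, not the Popen object, so the
    # instance (and the descriptors it holds) can be garbage-collected.
    atexit.register(_kill_pid, proc.pid)
    return proc
```

Note that killing by pid alone can in principle hit an unrelated process if the pid has been reused, so this is a sketch of the approach described, not a bulletproof solution.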
Firstly, run ulimit -a
to find out the maximum number of open files allowed on your Linux system.
Then edit the system configuration file /etc/security/limits.conf
and add the following line at the bottom.
* - nofile 204800
Then you can open more subprocesses if you want.
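The limits.conf change only takes effect for new login sessions. To raise the soft limit in the current shell session instead (illustrative commands, not from the original answer):

```shell
# Current soft limit on open file descriptors
ulimit -n

# Hard limit, which is the ceiling the soft limit can be raised to
ulimit -Hn

# Raise the soft limit up to the hard limit for this shell session only;
# raising the hard limit itself requires root or limits.conf.
ulimit -n "$(ulimit -Hn)"
```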