What's wrong with this multiprocessing Python script?
I'm writing a simple terminal-based messaging program (2 users). What's wrong with this multiprocessing Python script?
To implement it, I decided to create 2 processes: one for the server (which waits for messages to arrive from the other user) and one for the client (which just sends messages to the other user's server process).
The thing is, when I run it, I get the following error:
C:\>python message.py
> Process Process-2:
Traceback (most recent call last):
  File "C:\Python27\lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
  File "C:\Python27\lib\multiprocessing\process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "C:\message.py", line 12, in send_messages
    message = raw_input('> ')
EOFError
Process Process-1:
Traceback (most recent call last):
  File "C:\Python27\lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
  File "C:\Python27\lib\multiprocessing\process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "C:\message.py", line 25, in receive_messages
    message = sc.recv(1024)
error: [Errno 10054] An existing connection was forcibly closed by the remote host
Here is my Python code:
from multiprocessing import Process
import socket

direction = "localhost"

global s
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def send_messages():
    s.connect((direction, 5124))
    while True:
        message = raw_input('> ')
        s.send(message)
        if message == 'close':
            break
    print 'Bye'
    s.close()

def receive_messages():
    s.bind(("localhost", 5124))
    s.listen(2)
    sc, addr = s.accept()
    while True:
        message = sc.recv(1024)
        print message
    sc.close()
    s.close()

if __name__ == '__main__':
    p1 = Process(target = receive_messages)
    p1.start()
    p2 = Process(target = send_messages)
    p2.start()
    p1.join()
    p2.join()
Note 1: there may be some indentation errors caused by cutting and pasting from my text editor into the question.
Note 2: I'm on Windows 10.
When you spawn a process in Python, it closes stdin. You cannot use a subprocess to collect standard input. Use the main process to collect the input, and publish it from the main process to a queue. It is possible to pass stdin to another process, but you may need to close it in the main process.
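A minimal sketch of that queue-based approach (the `chat_worker` and `run_demo` names are made up for this illustration; in the real program, each element of `lines` would come from raw_input() in the main process):

```python
from multiprocessing import Process, Queue

def chat_worker(inbox, outbox):
    # Child process: never touches stdin, only reads from a queue.
    while True:
        msg = inbox.get()
        if msg == 'close':
            outbox.put('Bye')
            break
        outbox.put('echo: ' + msg)

def run_demo(lines):
    # Main process owns stdin; it feeds the queue and collects replies.
    inbox, outbox = Queue(), Queue()
    p = Process(target=chat_worker, args=(inbox, outbox))
    p.start()
    replies = []
    for line in lines:
        inbox.put(line)
        replies.append(outbox.get())
    p.join()
    return replies
```

Only the main process ever reads input, so the child never hits the closed-stdin problem.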
Perhaps you can work around this by reopening stdin in the child process with fdopen(). See this answer.
Here is an example based on your code:
from multiprocessing import Process
import socket
import sys
import os

direction = "localhost"

# I got this error:
#   error: [Errno 106] Transport endpoint is already connected
# when I ran your code, so I made some changes.
# You can ignore them if everything is OK when your code runs.
global s1
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
global s2
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def send_messages(fileno):
    s1.connect((direction, 5104))
    sys.stdin = os.fdopen(fileno)  # reopen stdin in this process
    while True:
        message = raw_input('> ')
        s1.send(message)
        if message == 'close':
            break
    print 'Bye'
    s1.close()

def receive_messages():
    s2.bind(("localhost", 5104))
    s2.listen(2)
    sc, addr = s2.accept()
    while True:
        message = sc.recv(1024)
        if message == 'close':
            print 'Bye!'
            break
        print message
    sc.close()
    s2.close()

if __name__ == '__main__':
    fn = sys.stdin.fileno()  # get the original file descriptor
    p1 = Process(target = receive_messages)
    p1.start()
    p2 = Process(target = send_messages, args=(fn,))
    p2.start()
    p1.join()
    p2.join()
I tested it, and it works.
The error you're receiving basically means that your raw_input is receiving empty input. That situation raises EOFError, which you can read about in the built-in exceptions section of the documentation.
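That EOFError is easy to reproduce by pointing stdin at an exhausted stream. A small sketch (Python 3 here, where input() is the equivalent of raw_input; `read_line_or_none` is a made-up helper name):

```python
import io
import sys

def read_line_or_none(stream):
    # input() raises EOFError once the stream is exhausted, which is
    # what a child process sees when it inherits a closed stdin.
    old_stdin = sys.stdin
    sys.stdin = stream
    try:
        return input()
    except EOFError:
        return None
    finally:
        sys.stdin = old_stdin
```

An empty stream gives EOFError immediately; a stream with a line in it reads normally.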
I've never tried multiprocessing like this before, but I imagine this is where your problem lies. Before moving to multiple processes, make sure your logic works as expected in a single process; even then, I suspect trying to have spawned processes collect user input will be a headache.
Beware of replacing sys.stdin with a "file-like object"

multiprocessing originally unconditionally called:

    os.close(sys.stdin.fileno())

in the multiprocessing.Process._bootstrap() method; this resulted in issues with processes-in-processes. This has been changed to:

    sys.stdin.close()
    sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)

Which solves the fundamental issue of processes colliding with each other resulting in a bad file descriptor error, but introduces a potential danger to applications which replace sys.stdin with a "file-like object" with output buffering. This danger is that if multiple processes call close() on this file-like object, it could result in the same data being flushed to the object multiple times, resulting in corruption.
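That flushing hazard can be sketched in a single process with a toy buffered sink (`BufferedSink` is a made-up class for illustration): if close() flushes without clearing or guarding the buffer, a second close() writes the same data again.

```python
class BufferedSink(object):
    # Toy "file-like object" with output buffering, as in the warning above.
    def __init__(self):
        self.buffer = []
        self.written = []   # stands in for the underlying file

    def write(self, data):
        self.buffer.append(data)

    def flush(self):
        self.written.extend(self.buffer)
        # A robust implementation would clear the buffer here
        # (self.buffer = []) or track whether the sink is already closed.

    def close(self):
        self.flush()

sink = BufferedSink()
sink.write('hello')
sink.close()   # flushes 'hello'
sink.close()   # a second close (e.g. from another process) flushes it again
```

After the second close(), 'hello' has been written twice, which is the corruption the docs warn about.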
The bottom line is that your Processes have stdin closed, unless you subclass them and avoid doing so.
You could consider using one (capital-P) Process to handle the communication, and then doing the input/output in the original (lowercase) process:
if __name__ == '__main__':
    p1 = Process(target = receive_messages)
    p1.start()
    send_messages()
    p1.join()