0x1949 Team - FAZEMRX - MANAGER
Edit File: asyncore.cpython-38.pyc
"""Basic infrastructure for asynchronous socket service clients and servers.

There are only two ways to have a program on a single processor do "more
than one thing at a time".  Multi-threaded programming is the simplest and
most popular way to do it, but there is another very different technique,
that lets you have nearly all the advantages of multi-threading, without
actually using multiple threads.  It's really only practical if your program
is largely I/O bound.  If your program is CPU bound, then pre-emptive
scheduled threads are probably what you really need.  Network servers are
rarely CPU-bound, however.

If your operating system supports the select() system call in its I/O
library (and nearly all do), then you can use it to juggle multiple
communication channels at once; doing other work while your I/O is taking
place in the "background."  Although this strategy can seem strange and
complex, especially at first, it is in many ways easier to understand and
control than multi-threaded programming.  The module documented here solves
many of the difficult problems for you, making the task of building
sophisticated high-performance network servers and clients a snap.
"""

import select
import socket
import sys
import time
import warnings

import os
from errno import EALREADY, EINPROGRESS, EWOULDBLOCK, ECONNRESET, EINVAL, \
     ENOTCONN, ESHUTDOWN, EISCONN, EBADF, ECONNABORTED, EPIPE, EAGAIN, \
     errorcode

_DISCONNECTED = frozenset({ECONNRESET, ENOTCONN, ESHUTDOWN, ECONNABORTED,
                           EPIPE, EBADF})

# Global map of file descriptors to dispatcher objects, created on first import.
try:
    socket_map
except NameError:
    socket_map = {}


def _strerror(err):
    try:
        return os.strerror(err)
    except (ValueError, OverflowError, NameError):
        if err in errorcode:
            return errorcode[err]
        return "Unknown error %s" % err


class ExitNow(Exception):
    pass


_reraised_exceptions = (ExitNow, KeyboardInterrupt, SystemExit)


def read(obj):
    try:
        obj.handle_read_event()
    except _reraised_exceptions:
        raise
    except:
        obj.handle_error()

# The remainder of the compiled module follows the same layout as the stdlib
# source: write, _exception, readwrite, poll, poll2 (with poll3 = poll2), loop,
# class dispatcher, class dispatcher_with_send(dispatcher), compact_traceback,
# close_all, and, when os.name == 'posix', file_wrapper and file_dispatcher.
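For reference, the select()-driven model described in the docstring is used through the module's public dispatcher/loop API. The following is a minimal usage sketch, not part of the compiled file: an HTTP client dispatcher driven by asyncore.loop(). The HTTPClient name, host, and path are placeholders chosen for illustration.

import asyncore
import socket

class HTTPClient(asyncore.dispatcher):
    """Fetch one URL over a non-blocking socket managed by asyncore."""

    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, 80))
        request = 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, host)
        self.buffer = request.encode('ascii')

    def handle_connect(self):
        pass

    def handle_close(self):
        self.close()

    def handle_read(self):
        # Called by the loop whenever select() reports the socket readable.
        print(self.recv(8192))

    def writable(self):
        # Only ask select() for writability while request bytes remain.
        return len(self.buffer) > 0

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

if __name__ == '__main__':
    client = HTTPClient('www.python.org', '/')   # placeholder host/path
    asyncore.loop()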