- Hands-On Enterprise Application Development with Python
- Saurabh Badhwar
Implementing a simple socket server with AsyncIO
The AsyncIO library that ships with Python provides a lot of powerful functionality. One of these many features is the ability to interface with and manage socket communication. This gives the programmer the ability to handle sockets asynchronously and hence allows a much larger number of clients to connect to the server.
The following code sample builds a simple socket server that uses a callback-based mechanism to handle communication with the clients:
# async_socket_server.py
#!/usr/bin/python3
import asyncio

class MessageProtocol(asyncio.Protocol):
    """An asyncio protocol implementation to handle the incoming messages."""

    def connection_made(self, transport):
        # Called once when a new client connects; keep the transport
        # around so we can write back to the client later.
        print("Got a new connection")
        self.transport = transport

    def data_received(self, data):
        # Called whenever the connected client sends some data.
        print("Data received")
        self.transport.write("Message received".encode('utf-8'))

loop = asyncio.get_event_loop()

# Create a coroutine that sets up a TCP server on localhost:7000,
# using MessageProtocol to handle each client connection.
server_handler = loop.create_server(MessageProtocol, 'localhost', 7000)
server = loop.run_until_complete(server_handler)

try:
    # Run the event loop until interrupted with Ctrl+C.
    loop.run_forever()
except KeyboardInterrupt:
    server.close()
    loop.close()
So, we have just implemented a socket server using AsyncIO; there's a lot of new code here. Let's take some time to understand what lies behind the scenes of these magic lines.
To start with the implementation, we first define a protocol class named MessageProtocol. This class inherits from the asyncio.Protocol class, which provides a base implementation of a streaming protocol, such as TCP:
class MessageProtocol(asyncio.Protocol):
In the context of the AsyncIO library, a protocol defines how the data from the underlying socket should be dealt with. In other words, the protocol provides the abstraction through which the application handles the data arriving on the socket.
Inside the protocol, we have overridden the implementation of two methods, namely connection_made and data_received. Let's take a closer look at these two methods and understand what they do:
- connection_made: This method comes from the BaseProtocol class, which is the base class for all the protocol classes inside AsyncIO, and is responsible for handling the event of a new client connecting to the server. The implementation allows us to perform extra steps when a new client has joined the server. When such a connection event takes place, the method receives as a parameter the transport object, which represents the connection type (streaming, datagram, or Unix pipe) to the underlying socket. Using this transport object, we can interact with the client by performing read and write operations.
- data_received: This method comes from the Protocol class of the AsyncIO library and handles the event that occurs when the client has sent some data. On a data-receive event, the method receives as a parameter a bytes object containing the data sent by the client. The underlying transport determines whether the data is buffered or unbuffered, so the method implementation should not make any assumptions about it and should handle the data generically. A small sketch extending these two callbacks follows this list.
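To make the role of the transport object a little more concrete, here is a minimal sketch (not part of the book's example) of a protocol that records the connecting client's address and echoes back whatever it receives. The EchoProtocol name and the echo behaviour are illustrative assumptions; the callbacks and transport methods used are standard asyncio APIs:
# echo_protocol.py (illustrative sketch)
import asyncio

class EchoProtocol(asyncio.Protocol):
    """Echo every message back to the client that sent it."""

    def connection_made(self, transport):
        # The transport exposes details about the connection, such as
        # the address of the peer that has just connected.
        self.transport = transport
        self.peername = transport.get_extra_info('peername')
        print("Connection from", self.peername)

    def data_received(self, data):
        # data is a bytes object; echo it straight back to the client.
        print("Received", len(data), "bytes from", self.peername)
        self.transport.write(data)

    def connection_lost(self, exc):
        # Called when the client disconnects or an error occurs.
        print("Connection to", self.peername, "closed")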
Now, once we have defined our MessageProtocol, the next thing to do is to start an asynchronous server to deal with the client connections.
To achieve this, we start by getting hold of an AsyncIO event loop with a call to asyncio.get_event_loop().
Once we have attained the event loop, we next make a call to the create_server() coroutine of the event loop:
loop.create_server(MessageProtocol, 'localhost', 7000)
This coroutine is responsible for starting a TCP server on the host. To the create_server() coroutine, we provide three basic parameters: a protocol class that will handle the client connections, the host on which the server should run, and the port to which the server should bind.
Since create_server() is an AsyncIO coroutine, calling it does not start the server right away; it gives us a coroutine object that still has to be run on the event loop. We do this by calling the event loop's run_until_complete() method and passing it the object we received from create_server(); the call returns the server object once the server is up and listening:
server = loop.run_until_complete(server_handler)
Once this is done, the server is set up and listening for connections. The only thing that remains is to start our main event loop, inside which the server will run.
To start our main event loop, we make a call to the run_forever() method of the loop, which exits only when a KeyboardInterrupt occurs, at which point we close the server and the loop.
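To see the server in action, you could point a small client at it. The following is a minimal sketch (not from the book) that uses asyncio.open_connection() to connect to localhost:7000, send one message, and print the server's reply; the file name and the message text are assumptions made purely for illustration:
# test_client.py (illustrative sketch)
import asyncio

async def send_message(message):
    # Open a TCP connection to the server started above.
    reader, writer = await asyncio.open_connection('localhost', 7000)

    # Send the message and wait until the write buffer is flushed.
    writer.write(message.encode('utf-8'))
    await writer.drain()

    # Read the server's reply ("Message received") and print it.
    reply = await reader.read(100)
    print("Server replied:", reply.decode('utf-8'))

    writer.close()

loop = asyncio.get_event_loop()
loop.run_until_complete(send_message("Hello, server"))
loop.close()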
With this, we get a fair idea that AsyncIO is a powerful library that offers a lot of advantages over existing solutions based on multiple threads or multiple processes. But when should we use AsyncIO, and what do we gain by choosing it? Let's look into that:
- Better resource management: Since AsyncIO uses a single thread to manage the execution of the program, it makes better use of resources than solutions such as launching multiple threads or multiple processes, because the system does not need to maintain as much bookkeeping data for individual threads or processes. Also, since there is only a single thread, there is no need for the expensive context switches required to transfer control from one thread or process to another.
- Faster execution: Since AsyncIO uses lightweight coroutines to execute tasks along with a single event loop, the execution of tasks is generally faster due to the reduced context switching.
- Better suited for I/O-intensive tasks: The approach of AsyncIO is to switch tasks when a particular coroutine is blocked waiting for I/O to complete. For tasks that involve a considerable amount of I/O, this helps achieve a lot of scalability, because at any given time many tasks can be blocked on I/O while the others keep the CPU busy, as the sketch after this list illustrates.
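As a rough illustration of that last point, the following sketch (not from the book) runs several simulated I/O-bound tasks concurrently on a single thread; asyncio.sleep() stands in for a blocking I/O wait, and the task names and timings are arbitrary assumptions:
# io_bound_demo.py (illustrative sketch)
import asyncio
import time

async def fetch(name, delay):
    # asyncio.sleep() simulates waiting on I/O (a network call, a disk
    # read, and so on); while one task waits, the loop runs the others.
    print(f"{name}: started")
    await asyncio.sleep(delay)
    print(f"{name}: finished after {delay}s of simulated I/O")

async def main():
    # Run three I/O-bound tasks concurrently on a single thread.
    await asyncio.gather(
        fetch("task-1", 2),
        fetch("task-2", 2),
        fetch("task-3", 2),
    )

start = time.time()
asyncio.get_event_loop().run_until_complete(main())
print(f"All tasks completed in about {time.time() - start:.1f}s, not 6s")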
Now we know about the different ways through which we can improve the ability of our enterprise applications to scale out to a larger audience. For socket-based applications, this is a good approach, especially now that more and more business workloads are moving to the web and involve a lot of complex socket communication between client machines and application servers.
The techniques we have discussed so far work well for scaling out our enterprise web applications and help them achieve a lot of concurrency, but there is still scope for improvement. Let's now move on to understanding how we can boost the ability of enterprise web applications to handle a large number of clients.