Queues

Pydantic is useful for validating data as it enters and leaves a queue. Below, we'll explore how to validate / serialize data with various queue systems.
Redis Queue

Redis is a popular in-memory data structure store.

In order to run this example locally, you'll first need to install Redis and start your server up locally.

Here's a simple example of how you can use Pydantic to:

- Serialize data to push to the queue
- Deserialize and validate data when popping from the queue
```python
import redis

from pydantic import BaseModel, EmailStr


class User(BaseModel):
    id: int
    name: str
    email: EmailStr


r = redis.Redis(host='localhost', port=6379, db=0)
QUEUE_NAME = 'user_queue'


def push_to_queue(user_data: User) -> None:
    serialized_data = user_data.model_dump_json()
    r.rpush(QUEUE_NAME, serialized_data)
    print(f'Added to queue: {serialized_data}')


user1 = User(id=1, name='John Doe', email='[email protected]')
user2 = User(id=2, name='Jane Doe', email='[email protected]')

push_to_queue(user1)
#> Added to queue: {"id":1,"name":"John Doe","email":"[email protected]"}

push_to_queue(user2)
#> Added to queue: {"id":2,"name":"Jane Doe","email":"[email protected]"}


def pop_from_queue() -> None:
    data = r.lpop(QUEUE_NAME)

    if data:
        user = User.model_validate_json(data)
        print(f'Validated user: {repr(user)}')
    else:
        print('Queue is empty')


pop_from_queue()
#> Validated user: User(id=1, name='John Doe', email='[email protected]')

pop_from_queue()
#> Validated user: User(id=2, name='Jane Doe', email='[email protected]')

pop_from_queue()
#> Queue is empty
```
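The same round trip can be tried without a live Redis server. The sketch below (an assumption for illustration, not part of the example above) swaps the Redis list for an in-memory deque and uses a simplified `User` model with a plain `str` field instead of `EmailStr`; it also adds the `ValidationError` handling a real consumer would likely want when a malformed entry turns up on the queue.

```python
from collections import deque
from typing import Optional

from pydantic import BaseModel, ValidationError


class User(BaseModel):
    id: int
    name: str


# In-memory stand-in for the Redis list used above.
queue: deque = deque()


def push(user: User) -> None:
    # Same idea as rpush: store the JSON form of the model.
    queue.append(user.model_dump_json())


def pop() -> Optional[User]:
    # Same idea as lpop: take the oldest entry and validate it.
    if not queue:
        return None
    try:
        return User.model_validate_json(queue.popleft())
    except ValidationError as exc:
        # A malformed payload is reported instead of crashing the consumer.
        print(f'Discarding bad message ({exc.error_count()} error(s))')
        return None


push(User(id=1, name='John Doe'))
queue.append('{"id": "not-an-int", "name": "Jane Doe"}')  # simulate a corrupt entry

print(pop())
#> id=1 name='John Doe'
print(pop())
```

Handling `ValidationError` at the pop site means one corrupt entry is dropped and reported rather than taking the whole consumer down.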
RabbitMQ

RabbitMQ is a popular message broker that implements the AMQP protocol.

In order to run this example locally, you'll first need to install RabbitMQ and start your server up.

Here's a simple example of how you can use Pydantic to:

- Serialize data to push to the queue
- Deserialize and validate data when popping from the queue

First, let's create a sender script.
```python
import pika

from pydantic import BaseModel, EmailStr


class User(BaseModel):
    id: int
    name: str
    email: EmailStr


connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

QUEUE_NAME = 'user_queue'
channel.queue_declare(queue=QUEUE_NAME)


def push_to_queue(user_data: User) -> None:
    serialized_data = user_data.model_dump_json()
    channel.basic_publish(
        exchange='',
        routing_key=QUEUE_NAME,
        body=serialized_data,
    )
    print(f'Added to queue: {serialized_data}')


user1 = User(id=1, name='John Doe', email='[email protected]')
user2 = User(id=2, name='Jane Doe', email='[email protected]')

push_to_queue(user1)
#> Added to queue: {"id":1,"name":"John Doe","email":"[email protected]"}

push_to_queue(user2)
#> Added to queue: {"id":2,"name":"Jane Doe","email":"[email protected]"}

connection.close()
```
Here's the receiver script.
```python
import pika

from pydantic import BaseModel, EmailStr


class User(BaseModel):
    id: int
    name: str
    email: EmailStr


def main():
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    QUEUE_NAME = 'user_queue'
    channel.queue_declare(queue=QUEUE_NAME)

    def process_message(
        ch: pika.channel.Channel,
        method: pika.spec.Basic.Deliver,
        properties: pika.spec.BasicProperties,
        body: bytes,
    ):
        user = User.model_validate_json(body)
        print(f'Validated user: {repr(user)}')
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=QUEUE_NAME, on_message_callback=process_message)
    channel.start_consuming()


if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass
```
To test this example:

- Run the receiver script in one terminal to start the consumer.
- Run the sender script in another terminal to send messages.
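The receiver's callback will raise if a message on the queue doesn't match the model. A hedged sketch of one way to harden it: catch `ValidationError` and reject the message with `basic_nack(requeue=False)` so a bad payload can't loop forever. The `FakeChannel`/`FakeMethod` stand-ins below are hypothetical test doubles (not pika objects) so the callback can be exercised without a broker; the simplified `User` model drops `EmailStr` to avoid the extra dependency.

```python
from pydantic import BaseModel, ValidationError


class User(BaseModel):
    id: int
    name: str


def process_message(ch, method, properties, body: bytes) -> None:
    # Same callback shape pika passes to on_message_callback.
    try:
        user = User.model_validate_json(body)
    except ValidationError:
        # Reject without requeueing so one bad payload can't loop forever;
        # with a dead-letter exchange configured, it would land there.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
        return
    print(f'Validated user: {repr(user)}')
    ch.basic_ack(delivery_tag=method.delivery_tag)


# Minimal stand-ins for pika's channel/method objects, for local testing only.
class FakeChannel:
    def __init__(self):
        self.acked = []
        self.nacked = []

    def basic_ack(self, delivery_tag):
        self.acked.append(delivery_tag)

    def basic_nack(self, delivery_tag, requeue):
        self.nacked.append(delivery_tag)


class FakeMethod:
    def __init__(self, delivery_tag):
        self.delivery_tag = delivery_tag


channel = FakeChannel()
process_message(channel, FakeMethod(1), None, b'{"id": 1, "name": "John Doe"}')
#> Validated user: User(id=1, name='John Doe')
process_message(channel, FakeMethod(2), None, b'{"id": "oops"}')
```

In the real receiver, `ch` would be the live `pika.channel.Channel` delivered to the callback; the rest of the logic is unchanged.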
ARQ

ARQ is a fast, Redis-based job queue for Python. It's built on top of Redis and provides a simple way to handle background tasks.

In order to run this example locally, you'll need to install Redis and start your server up.

Here's a simple example of how you can use Pydantic with ARQ to:

- Define a model for your job data
- Serialize data when enqueueing jobs
- Validate and deserialize data when processing jobs
```python
import asyncio
from typing import Any

from arq import create_pool
from arq.connections import RedisSettings

from pydantic import BaseModel, EmailStr


class User(BaseModel):
    id: int
    name: str
    email: EmailStr


REDIS_SETTINGS = RedisSettings()


async def process_user(ctx: dict[str, Any], user_data: dict[str, Any]) -> None:
    user = User.model_validate(user_data)
    print(f'Processing user: {repr(user)}')


async def enqueue_jobs(redis):
    user1 = User(id=1, name='John Doe', email='[email protected]')
    user2 = User(id=2, name='Jane Doe', email='[email protected]')

    await redis.enqueue_job('process_user', user1.model_dump())
    print(f'Enqueued user: {repr(user1)}')

    await redis.enqueue_job('process_user', user2.model_dump())
    print(f'Enqueued user: {repr(user2)}')


class WorkerSettings:
    functions = [process_user]
    redis_settings = REDIS_SETTINGS


async def main():
    redis = await create_pool(REDIS_SETTINGS)
    await enqueue_jobs(redis)


if __name__ == '__main__':
    asyncio.run(main())
```
This script is complete. It should run "as is", both to enqueue jobs and to process them.
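What actually crosses the queue is the plain dict produced by `model_dump()`, which is why the worker re-validates it with `model_validate`. The sketch below isolates that round trip so it can be tried without Redis or a running ARQ worker; it uses a simplified `User` model without `EmailStr` to keep dependencies minimal.

```python
from pydantic import BaseModel, ValidationError


class User(BaseModel):
    id: int
    name: str


# The producer side: a plain dict, safe for the job queue to serialize.
payload = User(id=1, name='John Doe').model_dump()
assert payload == {'id': 1, 'name': 'John Doe'}

# The worker side: validate the dict back into a model before using it.
user = User.model_validate(payload)
print(repr(user))
#> User(id=1, name='John Doe')

# A payload that drifted out of shape is caught at validation time,
# not deep inside the job's business logic.
try:
    User.model_validate({'id': 'oops', 'name': 'Jane Doe'})
except ValidationError as exc:
    print(exc.error_count())
    #> 1
```

Validating at the worker boundary means schema drift between producer and consumer surfaces as a clear `ValidationError` rather than a confusing failure mid-job.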