Why are signals problematic?
When I was new to Django and building an application, I soon found myself wanting to lower the coupling between various parts of business logic. In particular, there was a requirement that specific changes to data needed to be logged in a way that would be accessible to users. For instance, when a user would post a message, or add another user to a project, or delete some information from a project, a log entry would be created to record the action. These log entries would have their own list view, where users would then be able to filter and sort through the events to see what had been going on in their project.
Rather than strapping this "Create a log entry" logic in everywhere in the app, I wanted to keep it separate and encapsulated. The best way I figured to do this was to rely on subscribing to events. Business logic would fire an "I-Have-Posted-A-Message" event, and the logging routine would pick up this event and do its thing. Simple.
Rather than rolling my own system for events and handlers (publishers and subscribers), I looked to the Django docs to see what the framework had to offer. Django offers a system called signals, which - when reading through the docs - sounds like exactly what is needed. Built-in signals like `pre_save` and `post_save` on models are already being fired, and all we need to do is subscribe to and handle them.
I started like this, and it was fine, but:
- Signal receivers are connected with weak references by default, so they can be garbage collected and fail silently
- Signals are fired and handled by the framework, so the traceability in your code is gone. Something happened and you don't know why
- The built-in signals will only get you so far. They fire per object and only include information about that object. No extra context
- You can create your own signals, but then you're building code that is tied to the framework, and you have to jump through hoops to keep Django happy
All in all, not a great experience.
What did I end up doing?
The pattern has many names: Pub/sub, events and handlers, push/pull, Mediator pattern. All of these have distinct variations, but have the same goal: decouple events from their handlers.
Writing such a system yourself is actually super simple, unless you need some very bespoke functionality. I found a solution that works great in my projects, and solves all the pain points above.
Let's get into the details. First, I need a `Mediator` class. The purpose of this class is to keep track of which handlers are registered for which types of events. After handlers have subscribed to events, the `Mediator` can publish events, which will then be passed to each subscribed handler:
```python
class Mediator:
    subscribers = dict()

    @classmethod
    def subscribe(cls, event_type, handler):
        if event_type not in cls.subscribers:
            cls.subscribers[event_type] = []
        cls.subscribers[event_type].append(handler())

    @classmethod
    def notify(cls, event):
        for handler in cls.subscribers.get(type(event), []):
            handler.handle(event)
```
There isn't much to tell here. It contains a dictionary mapping event types to lists of registered handlers, a means of registering handlers for event types, and a means of notifying handlers about an event.
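To see the whole flow in one place, here is a self-contained sketch. It repeats the `Mediator` class and pairs it with a hypothetical `MessagePostedEvent` and logging handler (neither is from the original codebase, they just stand in for real business events):

```python
class Mediator:
    subscribers = dict()

    @classmethod
    def subscribe(cls, event_type, handler):
        if event_type not in cls.subscribers:
            cls.subscribers[event_type] = []
        # Store an instance of the handler class on registration
        cls.subscribers[event_type].append(handler())

    @classmethod
    def notify(cls, event):
        # Fan the event out to every handler registered for its type
        for handler in cls.subscribers.get(type(event), []):
            handler.handle(event)


class MessagePostedEvent:
    def __init__(self, author, text):
        self.author = author
        self.text = text


class LogHandler:
    def __init__(self):
        self.log = []

    def handle(self, event):
        self.log.append(f"{event.author} posted: {event.text}")


# Register once, then fire events from anywhere
Mediator.subscribe(MessagePostedEvent, LogHandler)
Mediator.notify(MessagePostedEvent("alice", "hello"))
```

The caller firing the event never touches `LogHandler` directly, which is the decoupling the whole pattern is after.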
The `cls.subscribers[event_type].append(handler())` part may look a little weird - particularly the `.append(handler())` bit. The reason for it is simply so I can call the `subscribe` method with the class name, and `subscribe` will call the constructor and store an instance of the handler. Not the only way to do this, and maybe not the best way, but it was ok for me, since all handlers are expected to be dependency-free, with empty constructors and a single `handle` method:
```python
class BaseEventHandler:
    def handle(self, event):
        pass
```
Alright, we're getting there. Now we just need to define what events look like. For my purposes it is useful not to constrain events too much, since events can be anything. One constraint I do want, though, is to make sure that I know which user triggered the event, no matter what the event is. So the base class for my events looks like this:
```python
class BaseEvent:
    def __init__(self, actor: UserProxy):
        self.actor = actor

    def __eq__(self, other):
        # First check if the other object is of the same type
        if isinstance(other, self.__class__):
            # Compare the __dict__ attribute dictionary, which stores
            # all the attributes of an object
            return self.__dict__ == other.__dict__
        return NotImplemented
```
(The `__eq__` is just a helper so that I can compare different instances of an event by their members and consider them equal if they are made up of the same parts.)
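The effect of that `__eq__` can be seen with a small hypothetical subclass. For the sake of a runnable sketch, the `actor` here is a plain string rather than a `UserProxy`:

```python
class BaseEvent:
    def __init__(self, actor):
        self.actor = actor

    def __eq__(self, other):
        # Events of the same type with identical members compare equal
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        return NotImplemented


class UserRoleAddedEvent(BaseEvent):
    def __init__(self, actor, role):
        super().__init__(actor)
        self.role = role


a = UserRoleAddedEvent("alice", "admin")
b = UserRoleAddedEvent("alice", "admin")
c = UserRoleAddedEvent("alice", "viewer")
assert a == b  # same type, same members: equal
assert a != c  # same type, different members: not equal
```

This member-wise equality is exactly what makes the `assert_called_with` test further down work: the mock compares the expected event against the one the code under test constructed.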
Alright, so far so good. Now we have all the parts, but how do we put them together? One thing that was missing from Django in the past was a place to "bootstrap" an app, to make sure app-specific functionality would be set up before "starting" the app. At some point along the way (Django 1.7, as it turns out) came the option to create "apps.py" files with an `AppConfig` for each app. I think this is a good place to bootstrap the setup:
```python
# myapp/apps.py
from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        self.subscribe_to_events()

    def subscribe_to_events(self):
        # All imports are kept local, because events will import models,
        # which is too early for a module-level import here.
        from myapp.mediator import Mediator
        from myapp.events.user_events import (
            UserRoleAddedEvent, UserRoleRemovedEvent, UserRoleChangedEvent,
        )
        # (assumed module path for the handlers)
        from myapp.handlers.user_handlers import (
            UserRoleAddedHandler, UserRoleRemovedHandler, UserRoleChangedHandler,
        )

        Mediator.subscribe(event_type=UserRoleAddedEvent, handler=UserRoleAddedHandler)
        Mediator.subscribe(event_type=UserRoleRemovedEvent, handler=UserRoleRemovedHandler)
        Mediator.subscribe(event_type=UserRoleChangedEvent, handler=UserRoleChangedHandler)
```
Not a lot going on here, but let's unpack it:
When the app is imported, the `ready` method is called by the Django framework. `ready` calls my `subscribe_to_events` method, which subscribes all the necessary events to their handlers. Keep in mind that multiple handlers can be registered to each event, so you can strap in any kind of business logic here (logging, email notifications, reporting database updates, etc.).
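That fan-out is worth seeing concretely. Below is a condensed, Django-free sketch registering two hypothetical handlers (the names and the shared `calls` list are illustration only) for the same event; `setdefault` replaces the `if event_type not in ...` check from the original but behaves the same:

```python
class Mediator:
    subscribers = dict()

    @classmethod
    def subscribe(cls, event_type, handler):
        cls.subscribers.setdefault(event_type, []).append(handler())

    @classmethod
    def notify(cls, event):
        for handler in cls.subscribers.get(type(event), []):
            handler.handle(event)


class UserRoleAddedEvent:
    def __init__(self, actor):
        self.actor = actor


calls = []  # records what each handler did, for demonstration


class AuditLogHandler:
    def handle(self, event):
        calls.append(("audit", event.actor))


class EmailNotificationHandler:
    def handle(self, event):
        calls.append(("email", event.actor))


# Two handlers on one event type: one notify call reaches both,
# in registration order
Mediator.subscribe(UserRoleAddedEvent, AuditLogHandler)
Mediator.subscribe(UserRoleAddedEvent, EmailNotificationHandler)
Mediator.notify(UserRoleAddedEvent("alice"))
```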
Notice also that we're making static calls to the `Mediator` class. Recall that the purpose of the pattern is to reduce coupling in the codebase? Static calls to classes are definitely not the way to achieve this. I made a judgement call here though:
- My app will only ever have "one" event mediator.
- The implementation of the mediator is kept dead simple and can easily be replaced or extended with different requirements as needed.
- Django doesn't have a "service registry" like .NET applications do, so I can't register the mediator as a service and resolve it where I need it. It is therefore much simpler to just import it and make static calls
So basically, if it becomes a problem, we'll see what comes first: Either Django adds service registry functionality, or I'll find a way to implement that myself.
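If that day ever comes, one escape hatch (my own sketch here, not part of the solution above) is to turn the mediator into an instance that is handed to the code that fires events. Everything below - `MemberService`, `SpyMediator`, the tuple-shaped event - is hypothetical and only illustrates the injection:

```python
class Mediator:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event_type, handler):
        self._subscribers.setdefault(event_type, []).append(handler())

    def notify(self, event):
        for handler in self._subscribers.get(type(event), []):
            handler.handle(event)


class MemberService:
    # The mediator is injected, so tests or alternate implementations
    # can swap it out without patching
    def __init__(self, mediator):
        self._mediator = mediator

    def add_member(self, actor, member):
        # ... persist the membership here ...
        self._mediator.notify(("UserRoleAdded", actor, member))


class SpyMediator:
    # A stand-in that records events instead of dispatching them
    def __init__(self):
        self.events = []

    def notify(self, event):
        self.events.append(event)


spy = SpyMediator()
MemberService(spy).add_member("alice", "bob")
```

The trade-off is that every call site now needs the mediator passed in, which is exactly the plumbing the static-call shortcut avoids.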
One part is still missing: we haven't seen how to fire events! This is the simplest part. Wherever I need to fire an event, I simply call `Mediator.notify` directly:
```python
Mediator.notify(UserRoleAddedEvent(
    current_user,
    *other_args
))
```
Why is this better than signals?
With signals, Django owns the abstraction and controls how it works. You can subscribe to built-in signals or write your own, but you are not in control of how they operate. As soon as your requirements diverge from Django's design, you end up fighting the framework. This is the first reason why I think this is a good approach. Own your abstractions!
Next, this solution is highly testable! You can write unit tests that assert that when you notify an event, the proper handler picks it up and processes it correctly. This can be tested in a vacuum, without caring how or when the event would be notified in the actual application. Then, in other tests, you can assert that your app notifies the event when it is supposed to (by mocking `Mediator.notify`, for instance) and that it provides the correct arguments for the event constructor. In this way, you test both sides of the mediator. Your tests are decoupled from one another and kept very simple, though they are of course statically bound to the `Mediator` class, as mentioned above.
Example of testing the handler, making sure that it creates an `EventLogEntry`:
```python
class TestUserRoleAddedHandler:
    def test_handles_event(self, current_user):
        # Arrange
        [...]
        # Act
        Mediator.notify(UserRoleAddedEvent(current_user, *other_args))
        # Assert
        assert EventLogEntry.objects.count() == 1
        [...]
```
Example of a test that ensures that the event is fired where it is supposed to be:
```python
class TestAddingUserTriggersMediatorNotify:
    def test_add_group_triggers_user_role_added_event(self, current_user, mocker):
        # Arrange
        [...]
        # Act
        notify = mocker.patch.object(Mediator, "notify")
        MemberSvc().add_member(someone)
        # Assert
        notify.assert_called_with(UserRoleAddedEvent(
            current_user,
            someone,
            *other_args
        ))
        [...]
```