Python Antipatterns That Defeated Me
A comprehensive catalog of Python antipatterns encountered in real-world projects, covering type hinting mistakes, code complexity, architectural overengineering, error handling failures, and poor naming — with practical code examples and fixes for each.
I spent a year working on a complex Python project that taught me more through its mistakes than through its successes. This article is a catalog of antipatterns I encountered — drawn from that project and from Django's source code — with practical fixes for each one.
Type Hinting Issues
Missing and Improper Type Hints
Type hints are one of Python's most powerful tools for code clarity. Yet many codebases either skip them entirely or use them incorrectly.
Consider Django's template loader finder:
def find_template_loader(self, loader):
    if isinstance(loader, (tuple, list)):
        loader, *args = loader
    else:
        args = []

    if isinstance(loader, str):
        loader_class = import_string(loader)
        return loader_class(self, *args)
    else:
        raise ImproperlyConfigured(
            "Invalid value in template loaders configuration: %r" % loader
        )

Without type hints, you need to read the entire function body to understand what loader can be. Python 3.12 introduced several powerful typing features that help:
Type aliases make complex types readable:
type Point = tuple[float, float]

NewType creates distinct types from existing ones:
from typing import NewType
Byte = NewType('Byte', int)

Generic types enable flexible, reusable type definitions:
type ListOrTuple[T] = tuple[T, ...] | list[T]

# Usage
list_of_numbers: ListOrTuple[int] = [1, 2, 3]
tuple_of_strings: ListOrTuple[str] = ("a", "b", "c")

ParamSpec preserves function signatures through decorators:
from typing import Callable

def logger[**P, R](func: Callable[P, R]) -> Callable[P, R]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print("Calling function...")
        return func(*args, **kwargs)
    return wrapper

Concatenate adds parameters to function signatures:
from typing import Callable, Concatenate

def add_logging[**P, R](func: Callable[P, R]) -> Callable[Concatenate[str, P], R]:
    def wrapper(log_msg: str, *args: P.args, **kwargs: P.kwargs) -> R:
        print(f"Log: {log_msg}")
        return func(*args, **kwargs)
    return wrapper

Use timedelta Instead of Raw Integers for Time
A constant like this tells you nothing about the unit:
AWS_OPERATION_RETRY_DELAY_SEC: int = 9

Is it seconds? Milliseconds? The suffix helps, but what happens when it's passed to a function that expects milliseconds? Use timedelta instead:
AWS_OPERATION_RETRY_DELAY = timedelta(seconds=9)
retry_datetime = now() + AWS_OPERATION_RETRY_DELAY

The same problem plagues database models. Instead of:
collect_interval_minutes = models.IntegerField()

Use:
collect_interval = models.DurationField()

The confusion is real across the ecosystem. The requests library takes timeout in seconds as a float:
requests.get('https://github.com/', timeout=0.1)

But Playwright uses milliseconds:
context.set_default_timeout(5_000)

With timedelta, there is no ambiguity.
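To make the unit explicit, one option is to accept timedelta throughout your own code and convert to the third-party library's unit only at the call site. A minimal sketch (the conversion helpers are illustrative, and the real requests/Playwright calls are not shown):

```python
from datetime import timedelta

DEFAULT_TIMEOUT = timedelta(seconds=5)

def to_requests_timeout(timeout: timedelta) -> float:
    # requests expects seconds as a float
    return timeout.total_seconds()

def to_playwright_timeout(timeout: timedelta) -> float:
    # Playwright expects milliseconds; dividing two timedeltas yields the ratio
    return timeout / timedelta(milliseconds=1)

print(to_requests_timeout(DEFAULT_TIMEOUT))    # 5.0
print(to_playwright_timeout(DEFAULT_TIMEOUT))  # 5000.0
```

With this pattern, the unit conversion lives in exactly one place per library boundary.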
Function Overloading Does Not Work in Python
Unlike Java or C++, Python does not support function overloading. The second definition simply replaces the first:
def foo(a: int) -> int:
    return a * 2

def foo(a: str) -> int:
    return foo(int(a))

Only the second foo survives. Use typing.overload for type-checker hints, but the runtime implementation must handle all cases in one function body.
Dictionaries Instead of Typed Classes
Configuration dictionaries lack structure and validation:
class ShieldTestSettings(ShieldSettings):
    model_config = {
        'env_file': '.env.test',
        'extra': 'ignore',
    }

A dataclass provides autocomplete, validation, and documentation:
@dataclass(slots=True)
class ModelConfig:
    env_file: str
    extra: Literal["ignore", "forbid"] = "ignore"

class ShieldTestSettings(ShieldSettings):
    model_config = ModelConfig(
        env_file='.env.test',
        extra='ignore',
    )

Similarly, untyped API responses should be validated with Pydantic:
response = session.get(API_URL)
data: dict = response.json()

Becomes:
user_info = UserInfo.model_validate(data)

Code Complexity
Variable Reassignment Obscures Intent
Consider this pattern where ret gets reassigned multiple times:
def _validate_manager_state(self) -> bool:
    ret: bool = self._handle_shielded_server_change()  # <-- ret appears here
    # ...
    ret = self._handle_hosted_zone_change() or ret  # <-- and replaced here
    # ...
    return ret  # <-- what's returned in the end?

Using descriptive variable names makes intent clear:
def _validate_manager_state(self) -> bool:
    shielded_server_changed = self._handle_shielded_server_change()
    # ...
    hosted_zone_changed = self._handle_hosted_zone_change()
    # ...
    return shielded_server_changed or hosted_zone_changed

An even worse pattern chains boolean reassignments:
def clean_all(self) -> None:
    objects = self.state_manager.get_state().address_manager_created_objects
    cleaned: bool = True
    cleaned = self._clean_objects(objects, WAF, _remove_firewall) and cleaned
    cleaned = self._clean_objects(objects, ELB, _remove_elb) and cleaned
    cleaned = self._clean_objects(objects, SECURITY_GROUP, _remove_security_group) and cleaned
    cleaned = self._clean_objects(objects, TARGET_GROUP, _remove_target_group) and cleaned
    cleaned = self._clean_objects(objects, SUBNET, _remove_subnet) and cleaned
    cleaned = self._clean_objects(objects, VPC, _remove_vpc) and cleaned
    if not cleaned:
        raise AddressManagerException

Collecting the results into a list and checking them with all() is more Pythonic and readable:
def clean_all(self) -> None:
    objects = self.state_manager.get_state().address_manager_created_objects
    cleaned = [
        self._clean_objects(objects, WAF, _remove_firewall),
        self._clean_objects(objects, ELB, _remove_elb),
        self._clean_objects(objects, SECURITY_GROUP, _remove_security_group),
        self._clean_objects(objects, TARGET_GROUP, _remove_target_group),
        self._clean_objects(objects, SUBNET, _remove_subnet),
        self._clean_objects(objects, VPC, _remove_vpc),
    ]
    if not all(cleaned):
        raise AddressManagerException

Even Django has this problem with variable reassignment in its template loading code:
if loaders is None:
    loaders = ["django.template.loaders.filesystem.Loader"]
    if app_dirs:
        loaders += ["django.template.loaders.app_directories.Loader"]
    loaders = [("django.template.loaders.cached.Loader", loaders)]

Side Effects in Methods
Methods should either modify state OR return values — not both. Django's ChangeList.get_queryset violates this:
def get_queryset(self, request, exclude_parameters=None):
    (
        self.filter_specs,        # <------- SIDE EFFECT!
        self.has_filters,         # <------- SIDE EFFECT!
        remaining_lookup_params,
        filters_may_have_duplicates,
        self.has_active_filters,  # <------- SIDE EFFECT!
    ) = self.get_filters(request)
    # ...
    self.clear_all_filters_qs = self.get_query_string(  # <------- SIDE EFFECT!
        new_params=remaining_lookup_params,
        remove=self.get_filters_params(),
    )
    # ...
    return qs

A method called get_queryset should only return a queryset. Setting instance attributes is an unexpected side effect that makes subclassing and testing extremely difficult.
Too Many Constructor Parameters
When a class needs many dependencies, group related ones:
class MinerShield:
    def __init__(
        self,
        miner_hotkey: Hotkey,
        validators_manager: AbstractValidatorsManager,
        address_manager: AbstractAddressManager,
        manifest_manager: AbstractManifestManager,
        blockchain_manager: AbstractBlockchainManager,
        state_manager: AbstractMinerShieldStateManager,
        event_processor: AbstractMinerShieldEventProcessor,
        options: MinerShieldOptions):

Group the managers into a single parameter object:
@dataclass
class ManagersOptions:
    validators_manager: AbstractValidatorsManager
    address_manager: AbstractAddressManager
    manifest_manager: AbstractManifestManager
    blockchain_manager: AbstractBlockchainManager
    state_manager: AbstractMinerShieldStateManager

class MinerShield:
    def __init__(
        self,
        miner_hotkey: Hotkey,
        managers: ManagersOptions,
        event_processor: AbstractMinerShieldEventProcessor,
        options: MinerShieldOptions,
    ):

Use dataclasses to eliminate boilerplate __init__ methods:
@dataclass
class ShieldClient:
    netuid: int
    wallet: bittensor_wallet.Wallet
    event_processor: AbstractMinerShieldEventProcessor
    blockchain_manager: AbstractBlockchainManager
    options: ShieldMetagraphOptions = field(default_factory=ShieldMetagraphOptions)
    manifest_manager: ReadOnlyManifestManager | None = None
    encryption_manager: AbstractEncryptionManager | None = None
    certificate_manager: EDDSACertificateManager | None = None

    def __post_init__(self):
        self.encryption_manager = self.encryption_manager or self.create_default_encryption_manager()
        self.manifest_manager = self.manifest_manager or self.create_default_manifest_manager(
            self.event_processor,
            self.encryption_manager,
        )
        self.certificate_manager = self.certificate_manager or self.create_default_certificate_manager()

Architecture Antipatterns
Abstract Interfaces for Single Implementations (YAGNI Violation)
Creating an abstract base class when only one implementation exists adds unnecessary indirection. The YAGNI principle ("You Aren't Gonna Need It") suggests building simple solutions first:
class AbstractEncryptionManager(Generic[PrivateKeyType, PublicKeyType], ABC):
    @abstractmethod
    def encrypt(self, public_key: PublicKeyType, data: bytes) -> bytes:
        pass

    @abstractmethod
    def decrypt(self, private_key: PrivateKeyType, data: bytes) -> bytes:
        pass

class ECIESEncryptionManager(AbstractEncryptionManager[PrivateKey, PublicKey]):
    _CURVE: Literal['ed25519'] = 'ed25519'
    _ECIES_CONFIG = Config(elliptic_curve=_CURVE)

    def encrypt(self, public_key: PublicKey, data: bytes) -> bytes:
        try:
            return ecies.encrypt(public_key, data, config=self._ECIES_CONFIG)
        except Exception as e:
            raise EncryptionError(f'Encryption failed: {e}') from e

    def decrypt(self, private_key: PrivateKey, data: bytes) -> bytes:
        try:
            return ecies.decrypt(private_key, data, config=self._ECIES_CONFIG)
        except Exception as e:
            raise DecryptionError(f'Decryption failed: {e}') from e

With only one implementation, just use the class directly:
class EncryptionManager:
    _CURVE: Literal['ed25519'] = 'ed25519'
    _ECIES_CONFIG = Config(elliptic_curve=_CURVE)

    def encrypt(self, public_key: PublicKey, data: bytes) -> bytes:
        try:
            return ecies.encrypt(public_key, data, config=self._ECIES_CONFIG)
        except Exception as e:
            raise EncryptionError(f'Encryption failed: {e}') from e

    def decrypt(self, private_key: PrivateKey, data: bytes) -> bytes:
        try:
            return ecies.decrypt(private_key, data, config=self._ECIES_CONFIG)
        except Exception as e:
            raise DecryptionError(f'Decryption failed: {e}') from e

Replacing Functions with Unnecessary Classes
When a class exists only to wrap a single function, use a function instead:
class AbstractMinerShieldEventProcessor(ABC):
    def event(self, template: str, exception: Exception | None = None, **kwargs):
        return self._add_event(MinerShieldEvent(template, exception, **kwargs))

    @abstractmethod
    def _add_event(self, event: MinerShieldEvent):
        pass

class PrintingMinerShieldEventProcessor(AbstractMinerShieldEventProcessor):
    def _add_event(self, event: MinerShieldEvent):
        print(f'{event.date}: MinerShieldEvent: {event.description}\nMetadata: {event.metadata}')
        if event.exception is not None:
            print('Exception happened:')
            traceback.print_exception(event.exception, file=stdout)

manager = ReadOnlyManifestManager(event_processor=PrintingMinerShieldEventProcessor())

A simple function does the same job:
def print_event_to_console(template: str, exception: Exception | None = None, **kwargs) -> None:
    event = MinerShieldEvent(template, exception, **kwargs)
    print(f'{event.date}: MinerShieldEvent: {event.description}\nMetadata: {event.metadata}')
    if event.exception is not None:
        print('Exception happened:')
        traceback.print_exception(event.exception, file=stdout)

manager = ReadOnlyManifestManager(event_processor=print_event_to_console)

Excessive Manager Pattern and Task Classes
The project I worked on had managers nested within managers, each with its own abstract interface:
class MinerShield:
    miner_hotkey: Hotkey
    worker_thread: Optional[threading.Thread]
    task_queue: Queue['AbstractMinerShieldTask']
    validators_manager: AbstractValidatorsManager
    address_manager: AbstractAddressManager
    manifest_manager: AbstractManifestManager
    blockchain_manager: AbstractBlockchainManager
    state_manager: AbstractMinerShieldStateManager
    event_processor: AbstractMinerShieldEventProcessor
    options: MinerShieldOptions

And task classes that were just boilerplate wrappers around method calls:
class MinerShieldInitializeTask(AbstractMinerShieldTask):
    def run(self, miner_shield: MinerShield):
        miner_shield._handle_initialize()

class MinerShieldDisableTask(AbstractMinerShieldTask):
    def run(self, miner_shield: MinerShield):
        miner_shield._handle_disable()

class MinerShieldValidateStateTask(AbstractMinerShieldTask):
    def run(self, miner_shield: MinerShield):
        miner_shield._handle_validate_state()

class MinerShieldValidatorsChangedTask(AbstractMinerShieldTask):
    def run(self, miner_shield: MinerShield):
        miner_shield._handle_validators_change()

class MinerShieldBanValidatorTask(AbstractMinerShieldTask):
    def __init__(self, validator_hotkey: Hotkey):
        self.validator_hotkey = validator_hotkey

    def run(self, miner_shield: MinerShield):
        miner_shield._handle_ban_validator(self.validator_hotkey)

class MinerShieldUpdateManifestTask(AbstractMinerShieldTask):
    def run(self, miner_shield: MinerShield):
        miner_shield._handle_update_manifest()

class MinerShieldPublishManifestTask(AbstractMinerShieldTask):
    def run(self, miner_shield: MinerShield):
        miner_shield._handle_publish_manifest()

Each task class simply delegates to a private method. This is Java-style thinking forced onto Python.
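Because functions are first-class in Python, the whole hierarchy can collapse into a queue of plain callables, with functools.partial covering the parameterized case. A sketch (the handler names are made up for illustration):

```python
from functools import partial
from queue import Queue
from typing import Callable

executed: list[str] = []

def handle_initialize() -> None:
    executed.append("initialize")

def handle_ban_validator(validator_hotkey: str) -> None:
    executed.append(f"ban:{validator_hotkey}")

# A queue of callables replaces the entire task-class hierarchy
task_queue: Queue[Callable[[], None]] = Queue()
task_queue.put(handle_initialize)                        # parameterless task
task_queue.put(partial(handle_ban_validator, "hk-123"))  # parameterized task

while not task_queue.empty():
    task_queue.get()()  # the worker simply calls whatever it dequeues

print(executed)  # ['initialize', 'ban:hk-123']
```

Seven classes become two functions; the worker loop does not change at all.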
Single-Line Wrapper Methods
Alias methods that add zero value:
class AwsShieldedServerData(BaseModel):
    def to_json(self) -> str:
        return self.model_dump_json()

    @staticmethod
    def from_json(json_str: str) -> 'AwsShieldedServerData':
        return AwsShieldedServerData.model_validate_json(json_str)

These wrappers around Pydantic's built-in methods just add another layer of indirection without any benefit. Call model_dump_json() and model_validate_json() directly.
Factory Classes
When the factory is just a collection of @classmethod calls, it's overengineered. Consider simplifying the construction or using a builder pattern if the complexity truly warrants it.
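To illustrate the point with a made-up example: a factory whose classmethods only forward arguments adds nothing over calling the constructor directly:

```python
class Client:
    def __init__(self, timeout: int = 30):
        self.timeout = timeout

# Overengineered: a "factory" that is just a collection of classmethods
class ClientFactory:
    @classmethod
    def create_default(cls) -> Client:
        return Client()

    @classmethod
    def create_fast(cls) -> Client:
        return Client(timeout=5)

# Simpler: call the constructor (or a module-level function) directly
fast_client = Client(timeout=5)
print(fast_client.timeout)  # 5
```

Sensible defaults on the constructor itself usually make the "default" factory method redundant too.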
Inadequate Separation of Concerns
The enable() method hid its true behavior behind task indirection:
def enable(self):
    if self.worker_thread is not None:
        return
    self.finishing = False
    self.run = True
    self._add_task(MinerShieldInitializeTask())
    self.worker_thread = threading.Thread(target=self._worker_function)
    self.worker_thread.start()

Compare with a clearer version where the steps are explicit:
class MinerShield:
    def enable(self):
        if self.is_running:
            return
        self.add_task(self.fetch_validators)
        self.add_task(self.create_addresses)
        self.add_task(self.save_manifest)
        self.add_task(self.publish_manifest)
        self.add_task(self.close_public_address)
        self.is_running = True
        self.worker_thread = threading.Thread(target=self._worker_function)
        self.worker_thread.start()

Error Handling
Returning Boolean Success Flags
Some methods returned True/False to indicate success, while others always returned True or raised exceptions. This inconsistency is dangerous:
def remove_target_group(self, target_group_id: str) -> bool:
    error_code: str = ''
    for _ in range(self.AWS_OPERATION_MAX_RETRIES):
        try:
            self.elb_client.delete_target_group(TargetGroupArn=target_group_id)
            break
        except ClientError as e:
            error_code = e.response['Error']['Code']
            if error_code == 'ResourceInUse':
                time.sleep(self.AWS_OPERATION_RETRY_DELAY_SEC)
            else:
                raise e
    else:
        self.event_processor.event(
            'Failed to remove AWS TargetGroup {id}, error={error_code}',
            id=target_group_id, error_code=error_code,
        )
        return False
    return True

Compare with these methods that always return True:
def _remove_subnet(self, subnet_id: str) -> bool:
    self.ec2_client.delete_subnet(SubnetId=subnet_id)
    return True

def _remove_vpc(self, vpc_id: str) -> bool:
    self.ec2_client.delete_vpc(VpcId=vpc_id)
    return True

If a method always returns True, the boolean return type is meaningless. Instead, raise exceptions on failure.
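A sketch of the exception-based convention, with a hypothetical RemovalError and the client passed in explicitly: returning means success, and failure is impossible to ignore silently:

```python
class RemovalError(Exception):
    """Raised when an AWS object cannot be removed (hypothetical)."""

def remove_subnet(ec2_client, subnet_id: str) -> None:
    # No boolean flag: success returns None, failure raises
    try:
        ec2_client.delete_subnet(SubnetId=subnet_id)
    except Exception as e:
        raise RemovalError(f'Failed to remove subnet {subnet_id}') from e
```

Callers that care can catch RemovalError; callers that forget get a loud traceback instead of a quietly ignored False.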
A retry decorator extracts the retry logic cleanly. Note that it must be a decorator factory (taking the configuration and returning a decorator) to be usable with @retry(...):

import time
from datetime import timedelta
from functools import wraps
from typing import Callable

def retry(n_times: int, wait_for: timedelta, exception_class: type[Exception]):
    def decorator(fn: Callable):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for _ in range(n_times):
                try:
                    return fn(*args, **kwargs)
                except exception_class as exc:
                    last_exc = exc
                    time.sleep(wait_for.total_seconds())
            raise last_exc
        return wrapper
    return decorator

Silencing Errors
Catching exceptions and returning None hides bugs:
def get_address(self, hotkey: Hotkey) -> Optional[Address]:
    serialized_address: Optional[bytes] = self.get(hotkey)
    if serialized_address is None:
        return None
    try:
        return self.address_serializer.deserialize(serialized_address)
    except AddressDeserializationException:
        return None

The caller cannot distinguish "no address exists" from "the address data is corrupted." Let the exception propagate, or at least log it.
Similarly, swallowing parse errors masks data problems:
def get_num_subscribers(text):
    try:
        if 'k' in text:
            return int(float(text.replace('k', '')) * 1_000)
        if 'm' in text:
            return int(float(text.replace('m', '')) * 1_000_000)
        if 'b' in text:
            return int(float(text.replace('b', '')) * 1_000_000_000)
        return int(text)
    except ValueError:
        logger.exception("Parsing error")

This function silently returns None when parsing fails. The caller likely doesn't check for None, leading to subtle bugs downstream.
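A stricter version (a sketch; the suffix table is an assumption about the input format) raises ValueError on anything it cannot parse, so bad data fails loudly at the boundary:

```python
_SUFFIXES = {'k': 1_000, 'm': 1_000_000, 'b': 1_000_000_000}

def get_num_subscribers(text: str) -> int:
    # Raises ValueError on malformed input instead of returning None
    text = text.strip().lower()
    if text and text[-1] in _SUFFIXES:
        return int(float(text[:-1]) * _SUFFIXES[text[-1]])
    return int(text)

print(get_num_subscribers('1.2k'))  # 1200
print(get_num_subscribers('42'))    # 42
```

If logging is still desired, catch the ValueError at the call site, log it, and re-raise.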
Defensive Assertions
Hidden assumptions should be made explicit. Instead of:
return hosted_zone.name[:-1]  # Cut '.' from the end of hosted zone name

Validate the assumption:

assert hosted_zone.name.endswith(".")
return hosted_zone.name[:-1]

# or simply:
return hosted_zone.name.removesuffix(".")

The more_itertools library provides one(), which asserts a collection has exactly one element:
# Instead of:
assert len(created_objects[AwsObjectTypes.ELB.value]) == 1
return self._get_elb_info(next(iter(created_objects[AwsObjectTypes.ELB.value])))

# Use:
return self._get_elb_info(one(created_objects[AwsObjectTypes.ELB.value]))

And Pydantic validates the shape of external data:
from pydantic import BaseModel

class Person(BaseModel):
    first_name: str
    last_name: str
    age: int

external_data = requests.get("http://localhost/api/person/1", timeout=5).json()
person = Person.model_validate(external_data)

Conclusion
Many of these antipatterns stem from trying to force Java patterns onto Python. Python's dynamic nature and first-class functions mean that many patterns common in Java — abstract interfaces, factory classes, task objects — are often unnecessary overhead. Write simple, explicit code. Use type hints. Raise exceptions instead of returning status codes. And always remember YAGNI: build what you need today, not what you might need tomorrow.