Working through the single responsibility principle (SRP) in Python when calls are expensive
Some base points:
- Python method calls are "expensive" due to its interpreted nature. In theory, if your code is simple enough, breaking Python code into smaller functions has a negative performance impact, with readability and reuse as the only gains (a big gain for developers, not so much for users).
- The single responsibility principle (SRP) keeps code readable and makes it easier to test and maintain.
- The project has a particular context where we want readable code, tests, and good runtime performance.
For instance, code like the following, which invokes four method calls, is slower than the version after it, which uses just one function call.
from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

## Operation with one vector
vec3 = Vector([1, 2, 3])
vec3.move([1, 1, 1])
vec3.revert()
vec3.get_coordinates()
In comparison to this:
from operator import add

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]

move_and_revert_and_return([1, 2, 3], [1, 1, 1])
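To make the gap concrete, here is a minimal benchmark sketch using `timeit` from the standard library. The wrapper names `with_methods` and `single_function` are hypothetical, added only for the measurement; both variants are redefined so the snippet is self-contained.

```python
# Minimal benchmark sketch: measure the method-call overhead with timeit.
import timeit
from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]

def with_methods():
    # Four calls: constructor, move, revert, get_coordinates.
    vec3 = Vector([1, 2, 3])
    vec3.move([1, 1, 1])
    vec3.revert()
    return vec3.get_coordinates()

def single_function():
    # One call doing the same arithmetic.
    return move_and_revert_and_return([1, 2, 3], [1, 1, 1])

# Both produce the same result.
assert with_methods() == single_function() == [4, 3, 2]

t_methods = timeit.timeit(with_methods, number=100_000)
t_single = timeit.timeit(single_function, number=100_000)
print(f"methods: {t_methods:.3f}s  single: {t_single:.3f}s")
```

On CPython the single-function variant should come out measurably faster, since it avoids the attribute lookups and call-frame setup of the method version, though the exact ratio depends on the interpreter and version.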
If I am to scale up something like that, it is clear I lose performance. Mind that this is just an example; my project has several mini routines with math like that. While the style is much easier to work with, our profilers dislike it.
How and where do we embrace the SRP without compromising performance in Python, given that the language's implementation makes calls inherently costly?
Are there workarounds, like some sort of pre-processor that inlines calls for a release build?
Or is Python simply poor at handling code breakdown altogether?