Working through the single responsibility principle (SRP) in Python when calls are expensive



Some base points:




  • Python method calls are "expensive" due to the language's interpreted nature. In theory, if your code is simple enough, breaking it down into small functions hurts performance, and its only benefits are readability and reuse (a big gain for developers, not so much for users).

  • The single responsibility principle (SRP) keeps code readable and makes it easier to test and maintain.

  • This project is one where we want all of the above: readable code, tests, and good runtime performance.

For instance, code like the following, which invokes four separate methods, is slower than the version after it, which does the same work in a single function call.



from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

## Operation with one vector
vec3 = Vector([1, 2, 3])
vec3.move([1, 1, 1])
vec3.revert()
vec3.get_coordinates()


In comparison to this:



from operator import add

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]

move_and_revert_and_return([1, 2, 3], [1, 1, 1])


If I parallelize something like this, it is objectively clear that I lose performance. Mind that this is just an example; my project has several mini routines doing math like that. While the broken-down style is much easier to work with, our profilers dislike it.
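Here is a minimal sketch of how the overhead can be measured with the standard timeit module (the iteration count is arbitrary; this is an illustration, not one of our project's benchmarks):

import timeit

setup = """
from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3
    def move(self, movement):
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates
    def revert(self):
        self.coordinates = self.coordinates[::-1]
        return self.coordinates
    def get_coordinates(self):
        return self.coordinates

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]
"""

# Four method calls per operation versus one plain function call.
four_calls = "v = Vector([1, 2, 3]); v.move([1, 1, 1]); v.revert(); v.get_coordinates()"
one_call = "move_and_revert_and_return([1, 2, 3], [1, 1, 1])"

print("four calls:", timeit.timeit(four_calls, setup=setup, number=1_000_000))
print("one call:  ", timeit.timeit(one_call, setup=setup, number=1_000_000))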




How and where do we embrace the SRP in Python without compromising performance, given that the language's implementation makes calls inherently costly?



Are there workarounds, like some sort of pre-processor that inlines things for a release build?



Or is Python simply poor at handling code breakdown altogether?










python performance single-responsibility methods






asked 9 hours ago by lucasgcb
edited 2 hours ago by Peter Mortensen







  • Possible duplicate of Is micro-optimisation important when coding? – gnat, 8 hours ago

  • For what it's worth, your two code examples do not differ in number of responsibilities. The SRP is not a method-counting exercise. – Robert Harvey, 6 hours ago

  • @RobertHarvey You're right; sorry for the poor example. I'll edit in a better one when I have the time. In either case, readability and maintainability suffer, and eventually the SRP breaks down within the codebase as we cut down on classes and their methods. – lucasgcb, 6 hours ago

  • Note that function calls are expensive in any language, though AOT compilers have the luxury of inlining. – Eevee, 3 hours ago

  • Does your object really have four responsibilities: move, revert, get_coordinates, and move_and_revert_and_return? Or does it really have only one responsibility, move_and_revert_and_return? – Eric Towers, 2 hours ago











2 Answers


Answer from Ewan (score 3; answered 8 hours ago, edited 5 hours ago by lucasgcb):
is Python simply poor at handling code breakdown altogether?




Unfortunately, yes: Python is slow, and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.



There is a workaround: Cython, which is a compiled version of Python and much faster.
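A minimal sketch of what that looks like, assuming Cython is installed (the file name and the type declarations are mine, not from the question):

# vector_ops.pyx -- build in place with: cythonize -i vector_ops.pyx
# Typing the loop variables lets Cython compile this to a plain C loop,
# avoiding most of the interpreter's per-call and per-iteration overhead.
def move_and_revert_and_return(list vector, list movement):
    cdef Py_ssize_t i, n = len(vector)
    cdef list out = [0] * n
    for i in range(n):
        # element-wise add, written out in reversed order
        out[n - 1 - i] = vector[i] + movement[i]
    return out

Calling code stays ordinary Python: import vector_ops and call vector_ops.move_and_revert_and_return([1, 2, 3], [1, 1, 1]).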






  • While Robert's answer helps cover some bases for potential misunderstandings behind doing this sort of optimization (which fits this question), I feel this answers the situation a bit more directly and in line with the Python context. – lucasgcb, 6 hours ago

  • Sorry it's somewhat short; I don't have time to write more. But I do think Robert is wrong on this one. The best advice with Python seems to be to profile as you code. Don't assume it will be performant, and only optimise if you find a problem. – Ewan, 6 hours ago

  • @Ewan: You don't have to write the entire program first to follow my advice. A method or two is more than sufficient to get adequate profiling. – Robert Harvey, 4 hours ago

  • You can also try PyPy, which is a JITted Python. – Eevee, 3 hours ago

  • @Ewan If you're really worried about the performance overhead of function calls, whatever you're doing is probably not suited for Python. But then I really can't think of many examples there. The vast majority of business code is I/O-limited, and the CPU-heavy stuff is usually handled by calling out to native libraries (NumPy, TensorFlow, and so on). – Voo, 1 hour ago










Answer from Robert Harvey (score 22; answered 8 hours ago, edited 6 hours ago):
Many potential performance concerns are not really a problem in practice. The issue you raise may be one of them. In the vernacular, we call worrying about those problems without proof that they are actual problems premature optimization.



If you are writing a front-end for a web service, your performance is not going to be significantly affected by function calls, because the cost of sending data over a network far exceeds the time it takes to make a method call.



If you are writing a tight loop that refreshes a video screen sixty times a second, then it might matter. But at that point, I claim you have larger problems if you're trying to use Python to do that, a job for which Python is probably not well-suited.



As always, the way you find out is to measure. Run a performance profiler or some timers over your code. See if it's a real problem in practice.
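For example, a minimal sketch with the standard library's cProfile (the workload here is a stand-in for the question's code, not something from the original post):

import cProfile
from operator import add

def move_and_revert_and_return(vector, movement):
    return list(map(add, vector, movement))[::-1]

def workload():
    # Repeat the call enough times for per-call overhead to be visible.
    for _ in range(1_000_000):
        move_and_revert_and_return([1, 2, 3], [1, 1, 1])

# Sort the report by cumulative time to see where the time actually goes.
cProfile.run("workload()", sort="cumulative")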




The Single Responsibility Principle is not a law or mandate; it is a guideline or principle. Software design is always about trade-offs; there are no absolutes. It is not uncommon to trade off readability and/or maintainability for speed, so you may have to sacrifice SRP on the altar of performance. But don't make that tradeoff unless you know you have a performance problem.






  • I think this was true until we invented cloud computing. Now one of the two functions effectively costs four times as much as the other. – Ewan, 8 hours ago

  • @Ewan Four times may not matter until you've measured it to be significant enough to care about. If Foo takes 1 ms and Bar takes 4 ms, that's not good. Until you realize that transmitting the data across the network takes 200 ms. At that point, Bar being slower doesn't matter so much. (Just one possible example of where being X times slower doesn't make a noticeable or impactful difference; not meant to be necessarily super realistic.) – Becuzz, 7 hours ago

  • @Ewan If the reduction in the bill saves you $15/month but it will take a $125/hour contractor 4 hours to fix and test it, I could easily justify that not being worth a business's time to do (or at least not do right now if time to market is crucial, etc.). There are always tradeoffs. And what makes sense in one circumstance might not in another. – Becuzz, 6 hours ago

  • Your AWS bills are very low indeed. – Ewan, 6 hours ago

  • @Ewan AWS rounds to the ceiling by batches anyway (the standard is 100 ms), which means this kind of optimization only saves you anything if it consistently avoids pushing you into the next chunk. – Delioth, 4 hours ago