How to check whether existing code base can deadlock


I'm improving/expanding a somewhat large code base, and I've introduced multithreading into it. Since this risks introducing code that could deadlock, which is nigh impossible to test for with black-box testing alone, I want to at least get some reassurance that my code is correct. I've already adhered to best practices, like never holding more than one lock at a time, always locking multiple locks in the same order, etc. But there are situations where my code could loop back upon itself in convoluted ways, which makes this hard to enforce.



There are three ways I see that might achieve this:



1. Heavy code review sessions with lots of eyes and lots of notes being taken.



I feel, however, that this could easily miss a few interactions. And even if the current code base is proven correct, a few weeks or months of tweaking by different developers would make all notes and proofs obsolete again, necessitating a new review cycle, which is extremely impractical.



2. Employ software to extract a call graph out of my code base.



I've already tried a few tools, and while they produce nice call graphs, it still takes manual inspection to see which functions lock what and how that bubbles up through the call graph. And again, as with the option above: change the code base, and the checks have to be redone.



3. Add tags to all lock-taking functions, and bubble up these tags in the function call stack.



The idea is that every unique lock gets its own tag, and functions taking more than one lock are tagged with all of them. This way, (potentially) incompatible function calls should stick out like a sore thumb.



This one looks promising in that the code base becomes self-explanatory, so changes to the code do not necessitate a new code review (as long as the people making the changes take care to keep the tagging up to date, of course). But most function names would become very unwieldy, certainly the more top-level ones, which gather a lot of cruft due to the number of lower-level locking functions they call. On top of that, some functions (in my case) just can't be tagged this way (C++ constructors and overloaded operators, e.g.), so I'd have to forgo these constructs and add explicit "Init()" and "Copy()" functions all over the place...



Does anyone have a better approach to this? Or are code reviews and adherence to best practices the best I can do? If it matters, I'm using C++ here, but not the .NET variant (so no reflection and such).
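For illustration, the tagging in option 3 could also be enforced at run time instead of through function names: give every mutex a hierarchy level, and assert in debug builds that a thread only ever takes locks in descending level order. A minimal sketch (the HierarchicalMutex name and the levels are made up for the example):

    #include <cassert>
    #include <mutex>

    // Debug-time lock-hierarchy check: every mutex gets a level, and a thread may
    // only acquire a mutex whose level is strictly lower than that of the last
    // mutex it acquired. A wrong order asserts immediately, even on runs that
    // happen not to deadlock.
    class HierarchicalMutex {
    public:
      explicit HierarchicalMutex(unsigned level) : level_(level) {}

      void lock() {
        assert(CurrentLevel() > level_ && "lock order violates the hierarchy");
        mutex_.lock();
        previous_level_ = CurrentLevel();
        CurrentLevel() = level_;
      }

      void unlock() {
        CurrentLevel() = previous_level_;
        mutex_.unlock();
      }

    private:
      static unsigned& CurrentLevel() {
        thread_local unsigned level = ~0u;  // no lock held yet on this thread
        return level;
      }

      std::mutex mutex_;
      const unsigned level_;
      unsigned previous_level_ = ~0u;
    };

    // The levels document the allowed order; RAII guards work unchanged.
    HierarchicalMutex table_mutex(100);
    HierarchicalMutex row_mutex(50);

    void Fine() {
      std::lock_guard<HierarchicalMutex> a(table_mutex);
      std::lock_guard<HierarchicalMutex> b(row_mutex);   // OK: 100 > 50
    }

    void Broken() {
      std::lock_guard<HierarchicalMutex> a(row_mutex);
      std::lock_guard<HierarchicalMutex> b(table_mutex); // asserts: 50 < 100
    }

The level bookkeeping assumes strictly nested (LIFO) unlocking, which the RAII guards provide.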


































  • Is there a particular reason you only are concerned about deadlocks and not other multi-threading issues such as race conditions?

    – JimmyJames
    May 30 at 17:17











  • I think I already set up the code to be invulnerable to race conditions; I mostly package the data together with its lock, hide it behind a Get method, and exclusively use RAII locker objects to control temporary access. It's more threads interacting with each other that I fear I have lost oversight of, where each thread could hold more than one lock shared between them...

    – Carl Colijn
    May 30 at 17:29











  • Even the strongest developers have a hard time getting multi-threading right. I would highly recommend you treat your solution with a great deal of suspicion. The best attitude is to assume it's wrong and try to prove otherwise. I don't mean this as a slight. Even if you are the greatest multi-threading programmer the world has ever seen, there are likely problems with your code. C++ is especially unforgiving for this, to my knowledge/recollection.

    – JimmyJames
    May 30 at 18:10











  • It seems that C++ 11 added a memory model so the situation is less dire than my ancient experience with this.

    – JimmyJames
    May 30 at 18:38











  • I wonder if ScenGen could be of any use to you. (No affiliation)

    – Dom
    May 30 at 20:19

















4 Answers

































I've already adhered to best practices, like never holding more than one lock at a time, always locking multiple locks in the same order, etc.




Well, if you only hold one lock at a time, the second case never arises.



If you do actually have to acquire multiple locks, then reducing the number of locks (ideally to one) is better than making sure to acquire them in the right order.
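When two locks genuinely must be held together, taking them in a single call also sidesteps the ordering question, because std::lock / std::scoped_lock (C++17) apply a deadlock-avoidance algorithm internally. A small sketch (the Account type is just for illustration):

    #include <mutex>

    struct Account {
      std::mutex m;
      long balance = 0;
    };

    // Both mutexes are acquired in one shot; even if another thread runs
    // Transfer(to, from, ...) at the same time, the pair cannot deadlock.
    // (Assumes from and to are distinct objects.)
    void Transfer(Account& from, Account& to, long amount) {
      std::scoped_lock lock(from.m, to.m);
      from.balance -= amount;
      to.balance += amount;
    }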




But there are situations where my code could loop back upon itself in convoluted ways,




If you have complex logic sharing data, you're going to have a hard time.



The easiest way to make your multi-threaded code correct is to not share data in the first place: factor out components that can run asynchronously on their own data, and communicate with them via message queues.
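A bare-bones sketch of that style (all names are illustrative): the component owns its data on its own thread, and other threads only post messages to it through a small queue:

    #include <condition_variable>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <thread>

    // One component = one thread plus one queue. Other threads never touch the
    // component's data directly; they only enqueue messages (here: closures that
    // run on the component's thread).
    class Component {
    public:
      Component() : worker_([this] { Run(); }) {}

      ~Component() {
        Post([this] { done_ = true; });
        worker_.join();
      }

      void Post(std::function<void()> message) {
        {
          std::lock_guard<std::mutex> lock(queue_mutex_);
          queue_.push_back(std::move(message));
        }
        queue_cv_.notify_one();
      }

    private:
      void Run() {
        while (!done_) {
          std::function<void()> message;
          {
            std::unique_lock<std::mutex> lock(queue_mutex_);
            queue_cv_.wait(lock, [this] { return !queue_.empty(); });
            message = std::move(queue_.front());
            queue_.pop_front();
          }
          message();  // runs on worker_, so the component's data needs no lock
        }
      }

      std::mutex queue_mutex_;
      std::condition_variable queue_cv_;
      std::deque<std::function<void()>> queue_;
      bool done_ = false;  // only read/written on the worker thread
      // ...the component's actual state lives here, owned by worker_...
      std::thread worker_;  // declared last so it starts after the other members
    };

The only mutex left is the one inside the queue, and it is never held while user code runs, so it cannot take part in a deadlock cycle.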



The second easiest is to split your code into tasks, run with std::async or in your own thread pool with std::packaged_task or similar. It's more general than the thread-per-component style, but you have to take some care with inter-task dependencies (and the scheduling overhead can dominate if you have lots of very fine-grained tasks).
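A sketch of the task style, where each task owns its inputs and hands back its result through a future (ParseChunk is a stand-in for real work):

    #include <future>

    int ParseChunk(int chunk_id) { return chunk_id * 10; }  // placeholder work

    // Each task runs on its own data and returns through a future;
    // nothing is shared, so nothing needs a lock.
    int ParseAll() {
      auto a = std::async(std::launch::async, ParseChunk, 0);
      auto b = std::async(std::launch::async, ParseChunk, 1);
      return a.get() + b.get();
    }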



In both cases you're specifically trying to package a chunk of data, and the only code that can operate on it, together. Communication then happens by explicitly passing ownership via message queues or promises/futures.



Note that even if you can get your complex-code-operating-on-shared-data correct (and, even less likely, prove that it's correct to some reasonable standard), all the locking, unlocking, stalling, and cache misses often eat away at any hoped-for performance improvement.































  • Quick reassurance: performance is not an issue here (plenty of cycles to burn, relatively speaking).

    – Carl Colijn
    May 30 at 17:00











  • For the rest: I've indeed already replaced some combined lock-and-get methods by a quick temporary internal lock and returning a copy instead, but this is not feasible in all situations, since data does need to be updated real time in all places. And I mostly always acquire only one lock at a time, but in some circumstances I need more, and the code doing so might be able to indirectly circle back upon itself or on other paths/threads also needing these already acquired locks, so order is thus also sometimes an issue.

    – Carl Colijn
    May 30 at 17:00











  • It's hard to reconcile "performance is not an issue" with "data does need to be updated in real time in all places", because if you can afford to be a little slow, you can probably afford to push all mutations through a single thread to induce total ordering with no synchronization but the message queue. It's obviously hard to discuss specifics without a lot of detail, though.

    – Useless
    May 30 at 20:34











  • Ah, sorry for the confusion (wrong terminology there); real-time not as in split-second accurate, but if some shared data changes all threads using it should pick up on it next time they would use it (as opposed to them getting away with stale cached copies).

    – Carl Colijn
    May 30 at 21:12
































If you're not already doing so, strict use of RAII can help make locking more deterministic. The only functions that are allowed to lock anything are constructors, and the only functions that unlock are destructors.



This may well mean creating a "Locker" class whose only function is to manage one lock. Creating a Locker object locks the resource, and destroying the object unlocks it again. If a function wants to use a protected object, it simply creates a Locker object as a local variable. RAII takes care of the rest.



Strict use of RAII prevents locks being left locked by accident, particularly if the code takes an unexpected path. That includes exceptions.
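In standard C++, std::lock_guard and std::unique_lock already play this Locker role; a hand-rolled version, plus the next step of bundling the data with its mutex so that it can only be reached through such a handle, might look roughly like this (Guarded is an illustrative name, not a standard type):

    #include <mutex>

    // Hand-rolled RAII locker: the constructor locks, the destructor unlocks,
    // nothing else can. std::lock_guard<std::mutex> does the same job.
    class Locker {
    public:
      explicit Locker(std::mutex& m) : m_(m) { m_.lock(); }
      ~Locker() { m_.unlock(); }
      Locker(const Locker&) = delete;
      Locker& operator=(const Locker&) = delete;
    private:
      std::mutex& m_;
    };

    // One step further: keep the data and its mutex together, and only hand out
    // access through an RAII handle, so unguarded access does not compile.
    template <typename T>
    class Guarded {
    public:
      class Handle {
      public:
        T* operator->() { return &data_; }
        T& operator*() { return data_; }
      private:
        friend class Guarded;
        Handle(std::mutex& m, T& data) : lock_(m), data_(data) {}
        std::unique_lock<std::mutex> lock_;
        T& data_;
      };

      Handle Access() { return Handle(mutex_, data_); }

    private:
      std::mutex mutex_;
      T data_;
    };

    // Usage: the lock is held exactly as long as the handle is alive.
    //   Guarded<std::vector<int>> values;
    //   values.Access()->push_back(42);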





























  • Thanks for the excellent suggestion! I already employ RAII as much as possible though, and have several 'Locker' classes at work already :)

    – Carl Colijn
    May 30 at 13:02
































Model with a proof assistant such as TLA+. You can model behaviors, protocols, and algorithms, and it can search for deadlocks, race conditions, etc. There are a number of guides on the web, such as https://learntla.com/introduction



The big cloud service providers use TLA+ to verify their systems — they've found very obscure bugs in lots of areas doing this, above and beyond mere testing. See http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-services-uses-formal-methods/fulltext





























  • Oofff... I see you picked up my mention of "proving" there :) But it's more that I want a "good enough" estimate that the code is fine, not a formal bulletproof proof. TLA+ certainly is nice, but a bit overkill here; it would mean we'd have to write our code base again in pseudocode for the checker to check.

    – Carl Colijn
    May 30 at 16:51











  • @CarlColijn I was considering mentioning that the title of your question didn't match the body, because you aren't actually asking for a proof.

    – Trixie Wolf
    May 30 at 16:59











  • @TrixieWolf: good call -- I've updated the title to match. But I'm glad Erik added this answer; it's too late for my situation, but when I start a new project from scratch it might be useful to have this from the start.

    – Carl Colijn
    May 30 at 17:04
































You might find thread safety analysis annotations of interest. Provided you can structure your code so the model fits, it provides a cheap way to check that things are OK. (The checks are also in gcc.)
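To give a sense of what those annotations look like under Clang's -Wthread-safety (a sketch only; real code usually hides the attributes behind the macros from the Clang documentation, and std::mutex has to be wrapped in an annotated type):

    #include <mutex>

    // Minimal annotated mutex wrapper; the analysis needs the capability
    // attributes, which std::mutex itself does not carry.
    class __attribute__((capability("mutex"))) Mutex {
    public:
      void Lock() __attribute__((acquire_capability())) { m_.lock(); }
      void Unlock() __attribute__((release_capability())) { m_.unlock(); }
    private:
      std::mutex m_;
    };

    class Account {
    public:
      // Callers must already hold mu_; Clang warns when they provably don't.
      void Credit(int amount) __attribute__((requires_capability(mu_))) {
        balance_ += amount;
      }

      void Deposit(int amount) {
        mu_.Lock();
        Credit(amount);  // fine: mu_ is held here
        mu_.Unlock();
      }

      void BrokenDeposit(int amount) {
        Credit(amount);  // warning: calling Credit() requires holding mu_
      }

    private:
      Mutex mu_;
      int balance_ __attribute__((guarded_by(mu_))) = 0;
    };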





























  • Thanks for that! There are some annotations that check for deadlock, but they all revolve around me specifying that a specific order of locking should be followed or that the caller may not hold certain locks. It seems this only checks for deadlock within a thread, and not so much between threads? In my case the problem is more that I have multiple processes all interacting in (somewhat) hard to determine ways, and they all need to access diverse data sources, sometimes more than one at a time.

    – Carl Colijn
    May 30 at 16:49











  • These locks are across threads (as they have to be), so when you annotate that a function/method can't be called when such a lock is held (or that a lock must be held when it's called) then that's a global assertion, not one within a thread. If you have a situation where a method sometimes needs a lock and sometimes doesn't then the annotations won't help. (If you're starting from scratch maybe that says your program structure needs changing, but I understand completely that for an existing codebase that might not be an option.)

    – Bruce Stephens
    May 30 at 17:23











  • The docs for EXCLUDES e.g. mention "This annotation is used to prevent deadlock. Many mutex implementations are not re-entrant, so deadlock can occur if the function acquires the mutex a second time." Re-entrancy is not really an issue in my case; it's more interprocess 2-lock-wrong-order deadlock I'm worried about. And indeed some functions need to lock 2 locks in succession, and locking is often only required on some code paths (e.g. syncing of shared data if needed and/or if it is ready).

    – Carl Colijn
    May 30 at 17:39











  • You're right, I think I confused myself. Generally, the annotations do require a particular (simple) structure of lock usage. If a program can be arranged to do that then the annotations provide a cheap way to check (and so maintain) correctness, but it sounds like your existing codebase (like many) may well not be suitable.

    – Bruce Stephens
    May 30 at 18:30











Your Answer








StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "131"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













draft saved

draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f392613%2fhow-to-check-whether-existing-code-base-can-deadlock%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























4 Answers
4






active

oldest

votes








4 Answers
4






active

oldest

votes









active

oldest

votes






active

oldest

votes









7















I've already adhered to best practices, like never have more than one lock locked at a time, always lock multiple locks in the same order, etc




Well, if you only hold one lock at a time, the second case never arises.



If you do actually have to acquire multiple locks, then reducing the number of locks (ideally to one) is better than making sure to acquire them in the right order.




But there are situations where my code could loop back upon itself in convoluted ways,




if you have complex logic sharing data, you're going to have a hard time.



The easiest way to make your multi-threaded code correct is to not share data in the first place: factor out components that can run asynchronously on their own data, and communicate with them via message queues.



The second easiest is to split your code into tasks, run using std::async or in your own threadpool with std::packaged_task or similar - it's more general than the thread-per-component style, but you have to take some care about inter-task dependencies (and the scheduling overhead can dominate if you have lots of very fine-grained tasks).



In both cases you're specifically trying to package a chunk of data, and the only code that can operate on it, together. Communication then happens by explicitly passing ownership via message queues or promises/futures.



Note that even if you can get your complex-code-operating-on-shared-data correct (and, even less likely, prove that it's correct to some reasonable standard), all the locking, un-locking, stalling and cache misses often eat away at any hoped-for performance improvement.






share|improve this answer

























  • Quick reassurance: performance is not an issue here (plenty of cycles to burn, relatively speaking).

    – Carl Colijn
    May 30 at 17:00











  • For the rest: I've indeed already replaced some combined lock-and-get methods by a quick temporary internal lock and returning a copy instead, but this is not feasible in all situations, since data does need to be updated real time in all places. And I mostly always acquire only one lock at a time, but in some circumstances I need more, and the code doing so might be able to indirectly circle back upon itself or on other paths/threads also needing these already acquired locks, so order is thus also sometimes an issue.

    – Carl Colijn
    May 30 at 17:00











  • It's hard to reconcile "performance is not an issue" with "data does need to be updated in real time in all places", because if you can afford to be a little slow, you can probably afford to push all mutations through a single thread to induce total ordering with no synchronization but the message queue. It's obviously hard to discuss specifics without a lot of detail, though.

    – Useless
    May 30 at 20:34











  • Ah, sorry for the confusion (wrong terminology there); real-time not as in split-second accurate, but if some shared data changes all threads using it should pick up on it next time they would use it (as opposed to them getting away with stale cached copies).

    – Carl Colijn
    May 30 at 21:12















7















I've already adhered to best practices, like never have more than one lock locked at a time, always lock multiple locks in the same order, etc




Well, if you only hold one lock at a time, the second case never arises.



If you do actually have to acquire multiple locks, then reducing the number of locks (ideally to one) is better than making sure to acquire them in the right order.




But there are situations where my code could loop back upon itself in convoluted ways,




if you have complex logic sharing data, you're going to have a hard time.



The easiest way to make your multi-threaded code correct is to not share data in the first place: factor out components that can run asynchronously on their own data, and communicate with them via message queues.



The second easiest is to split your code into tasks, run using std::async or in your own threadpool with std::packaged_task or similar - it's more general than the thread-per-component style, but you have to take some care about inter-task dependencies (and the scheduling overhead can dominate if you have lots of very fine-grained tasks).



In both cases you're specifically trying to package a chunk of data, and the only code that can operate on it, together. Communication then happens by explicitly passing ownership via message queues or promises/futures.



Note that even if you can get your complex-code-operating-on-shared-data correct (and, even less likely, prove that it's correct to some reasonable standard), all the locking, un-locking, stalling and cache misses often eat away at any hoped-for performance improvement.






share|improve this answer

























  • Quick reassurance: performance is not an issue here (plenty of cycles to burn, relatively speaking).

    – Carl Colijn
    May 30 at 17:00











  • For the rest: I've indeed already replaced some combined lock-and-get methods by a quick temporary internal lock and returning a copy instead, but this is not feasible in all situations, since data does need to be updated real time in all places. And I mostly always acquire only one lock at a time, but in some circumstances I need more, and the code doing so might be able to indirectly circle back upon itself or on other paths/threads also needing these already acquired locks, so order is thus also sometimes an issue.

    – Carl Colijn
    May 30 at 17:00











  • It's hard to reconcile "performance is not an issue" with "data does need to be updated in real time in all places", because if you can afford to be a little slow, you can probably afford to push all mutations through a single thread to induce total ordering with no synchronization but the message queue. It's obviously hard to discuss specifics without a lot of detail, though.

    – Useless
    May 30 at 20:34











  • Ah, sorry for the confusion (wrong terminology there); real-time not as in split-second accurate, but if some shared data changes all threads using it should pick up on it next time they would use it (as opposed to them getting away with stale cached copies).

    – Carl Colijn
    May 30 at 21:12













7












7








7








I've already adhered to best practices, like never have more than one lock locked at a time, always lock multiple locks in the same order, etc




Well, if you only hold one lock at a time, the second case never arises.



If you do actually have to acquire multiple locks, then reducing the number of locks (ideally to one) is better than making sure to acquire them in the right order.




But there are situations where my code could loop back upon itself in convoluted ways,




if you have complex logic sharing data, you're going to have a hard time.



The easiest way to make your multi-threaded code correct is to not share data in the first place: factor out components that can run asynchronously on their own data, and communicate with them via message queues.



The second easiest is to split your code into tasks, run using std::async or in your own threadpool with std::packaged_task or similar - it's more general than the thread-per-component style, but you have to take some care about inter-task dependencies (and the scheduling overhead can dominate if you have lots of very fine-grained tasks).



In both cases you're specifically trying to package a chunk of data, and the only code that can operate on it, together. Communication then happens by explicitly passing ownership via message queues or promises/futures.



Note that even if you can get your complex-code-operating-on-shared-data correct (and, even less likely, prove that it's correct to some reasonable standard), all the locking, un-locking, stalling and cache misses often eat away at any hoped-for performance improvement.






share|improve this answer
















I've already adhered to best practices, like never have more than one lock locked at a time, always lock multiple locks in the same order, etc




Well, if you only hold one lock at a time, the second case never arises.



If you do actually have to acquire multiple locks, then reducing the number of locks (ideally to one) is better than making sure to acquire them in the right order.




But there are situations where my code could loop back upon itself in convoluted ways,




if you have complex logic sharing data, you're going to have a hard time.



The easiest way to make your multi-threaded code correct is to not share data in the first place: factor out components that can run asynchronously on their own data, and communicate with them via message queues.



The second easiest is to split your code into tasks, run using std::async or in your own threadpool with std::packaged_task or similar - it's more general than the thread-per-component style, but you have to take some care about inter-task dependencies (and the scheduling overhead can dominate if you have lots of very fine-grained tasks).



In both cases you're specifically trying to package a chunk of data, and the only code that can operate on it, together. Communication then happens by explicitly passing ownership via message queues or promises/futures.



Note that even if you can get your complex-code-operating-on-shared-data correct (and, even less likely, prove that it's correct to some reasonable standard), all the locking, un-locking, stalling and cache misses often eat away at any hoped-for performance improvement.







share|improve this answer














share|improve this answer



share|improve this answer








edited May 30 at 14:27

























answered May 30 at 13:41









UselessUseless

9,35922139




9,35922139












  • Quick reassurance: performance is not an issue here (plenty of cycles to burn, relatively speaking).

    – Carl Colijn
    May 30 at 17:00











  • For the rest: I've indeed already replaced some combined lock-and-get methods by a quick temporary internal lock and returning a copy instead, but this is not feasible in all situations, since data does need to be updated real time in all places. And I mostly always acquire only one lock at a time, but in some circumstances I need more, and the code doing so might be able to indirectly circle back upon itself or on other paths/threads also needing these already acquired locks, so order is thus also sometimes an issue.

    – Carl Colijn
    May 30 at 17:00











  • It's hard to reconcile "performance is not an issue" with "data does need to be updated in real time in all places", because if you can afford to be a little slow, you can probably afford to push all mutations through a single thread to induce total ordering with no synchronization but the message queue. It's obviously hard to discuss specifics without a lot of detail, though.

    – Useless
    May 30 at 20:34











  • Ah, sorry for the confusion (wrong terminology there); real-time not as in split-second accurate, but if some shared data changes all threads using it should pick up on it next time they would use it (as opposed to them getting away with stale cached copies).

    – Carl Colijn
    May 30 at 21:12

















  • Quick reassurance: performance is not an issue here (plenty of cycles to burn, relatively speaking).

    – Carl Colijn
    May 30 at 17:00











  • For the rest: I've indeed already replaced some combined lock-and-get methods by a quick temporary internal lock and returning a copy instead, but this is not feasible in all situations, since data does need to be updated real time in all places. And I mostly always acquire only one lock at a time, but in some circumstances I need more, and the code doing so might be able to indirectly circle back upon itself or on other paths/threads also needing these already acquired locks, so order is thus also sometimes an issue.

    – Carl Colijn
    May 30 at 17:00











  • It's hard to reconcile "performance is not an issue" with "data does need to be updated in real time in all places", because if you can afford to be a little slow, you can probably afford to push all mutations through a single thread to induce total ordering with no synchronization but the message queue. It's obviously hard to discuss specifics without a lot of detail, though.

    – Useless
    May 30 at 20:34











  • Ah, sorry for the confusion (wrong terminology there); real-time not as in split-second accurate, but if some shared data changes all threads using it should pick up on it next time they would use it (as opposed to them getting away with stale cached copies).

    – Carl Colijn
    May 30 at 21:12
















Quick reassurance: performance is not an issue here (plenty of cycles to burn, relatively speaking).

– Carl Colijn
May 30 at 17:00





Quick reassurance: performance is not an issue here (plenty of cycles to burn, relatively speaking).

– Carl Colijn
May 30 at 17:00













For the rest: I've indeed already replaced some combined lock-and-get methods by a quick temporary internal lock and returning a copy instead, but this is not feasible in all situations, since data does need to be updated real time in all places. And I mostly always acquire only one lock at a time, but in some circumstances I need more, and the code doing so might be able to indirectly circle back upon itself or on other paths/threads also needing these already acquired locks, so order is thus also sometimes an issue.

– Carl Colijn
May 30 at 17:00





For the rest: I've indeed already replaced some combined lock-and-get methods by a quick temporary internal lock and returning a copy instead, but this is not feasible in all situations, since data does need to be updated real time in all places. And I mostly always acquire only one lock at a time, but in some circumstances I need more, and the code doing so might be able to indirectly circle back upon itself or on other paths/threads also needing these already acquired locks, so order is thus also sometimes an issue.

– Carl Colijn
May 30 at 17:00













It's hard to reconcile "performance is not an issue" with "data does need to be updated in real time in all places", because if you can afford to be a little slow, you can probably afford to push all mutations through a single thread to induce total ordering with no synchronization but the message queue. It's obviously hard to discuss specifics without a lot of detail, though.

– Useless
May 30 at 20:34





It's hard to reconcile "performance is not an issue" with "data does need to be updated in real time in all places", because if you can afford to be a little slow, you can probably afford to push all mutations through a single thread to induce total ordering with no synchronization but the message queue. It's obviously hard to discuss specifics without a lot of detail, though.

– Useless
May 30 at 20:34













Ah, sorry for the confusion (wrong terminology there); real-time not as in split-second accurate, but if some shared data changes all threads using it should pick up on it next time they would use it (as opposed to them getting away with stale cached copies).

– Carl Colijn
May 30 at 21:12





Ah, sorry for the confusion (wrong terminology there); real-time not as in split-second accurate, but if some shared data changes all threads using it should pick up on it next time they would use it (as opposed to them getting away with stale cached copies).

– Carl Colijn
May 30 at 21:12













1














If you're not already doing so, strict use of RAII can help make locking more deterministic. The only functions that are allowed to lock anything are constructors, and the only functions that unlock are destructors.



This may well end up with creating a "Locker" class, whose only function is to manage one lock. Creating a Locker object locks the resource, and destroying the object unlocks it again. If a function wants to use a protected object, simply create a Locker object as a local variable in that function. RAII takes care of the rest.



Strict use of RAII prevents locks being left locked by accident, particularly if the code takes an unexpected path. That includes exceptions.






share|improve this answer























  • Thanks for the excellent suggestion! I already employ RAII as much as possible though, and have several 'Locker' classes at work already :)

    – Carl Colijn
    May 30 at 13:02















1














If you're not already doing so, strict use of RAII can help make locking more deterministic. The only functions that are allowed to lock anything are constructors, and the only functions that unlock are destructors.



This may well end up with creating a "Locker" class, whose only function is to manage one lock. Creating a Locker object locks the resource, and destroying the object unlocks it again. If a function wants to use a protected object, simply create a Locker object as a local variable in that function. RAII takes care of the rest.



Strict use of RAII prevents locks being left locked by accident, particularly if the code takes an unexpected path. That includes exceptions.






share|improve this answer























  • Thanks for the excellent suggestion! I already employ RAII as much as possible though, and have several 'Locker' classes at work already :)

    – Carl Colijn
    May 30 at 13:02













1












1








1







If you're not already doing so, strict use of RAII can help make locking more deterministic. The only functions that are allowed to lock anything are constructors, and the only functions that unlock are destructors.



This may well end up with creating a "Locker" class, whose only function is to manage one lock. Creating a Locker object locks the resource, and destroying the object unlocks it again. If a function wants to use a protected object, simply create a Locker object as a local variable in that function. RAII takes care of the rest.



Strict use of RAII prevents locks being left locked by accident, particularly if the code takes an unexpected path. That includes exceptions.






share|improve this answer













If you're not already doing so, strict use of RAII can help make locking more deterministic. The only functions that are allowed to lock anything are constructors, and the only functions that unlock are destructors.



This may well end up with creating a "Locker" class, whose only function is to manage one lock. Creating a Locker object locks the resource, and destroying the object unlocks it again. If a function wants to use a protected object, simply create a Locker object as a local variable in that function. RAII takes care of the rest.



Strict use of RAII prevents locks being left locked by accident, particularly if the code takes an unexpected path. That includes exceptions.







share|improve this answer












share|improve this answer



share|improve this answer










answered May 30 at 12:29









Simon BSimon B

5,31221628




5,31221628












  • Thanks for the excellent suggestion! I already employ RAII as much as possible though, and have several 'Locker' classes at work already :)

    – Carl Colijn
    May 30 at 13:02

















  • Thanks for the excellent suggestion! I already employ RAII as much as possible though, and have several 'Locker' classes at work already :)

    – Carl Colijn
    May 30 at 13:02
















Thanks for the excellent suggestion! I already employ RAII as much as possible though, and have several 'Locker' classes at work already :)

– Carl Colijn
May 30 at 13:02





Thanks for the excellent suggestion! I already employ RAII as much as possible though, and have several 'Locker' classes at work already :)

– Carl Colijn
May 30 at 13:02











0














Modeling with proof assistant, such as TLA+.  You can model behaviors, protocols, algorithms, and it can search for deadlocks, race conditions, etc..  There are a number of guides on the web, such as https://learntla.com/introduction



The big cloud service providers use TLA+ to verify their systems — they've found very obscure bugs in lots of areas doing this, above and beyond mere testing. See http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-services-uses-formal-methods/fulltext






share|improve this answer























  • Oofff... I see you picked up my mention of "proving" there :) But it's more that I want a "good enough" estimate that the code is fine, not a formal bulletproof proof. TIA+ certainly is nice, but a bit overkill here; it would mean we'd have to write our code base again in pseudo code for the checker to check.

    – Carl Colijn
    May 30 at 16:51











  • @CarlColijn I was considering mention that the title of your question didn't match the body, because you aren't actually asking for a proof.

    – Trixie Wolf
    May 30 at 16:59











  • @TrixieWolf: good call -- I've updated the title to match. But I'm glad Erik added this answer; it's too late for my situation, but when I start a new project from scratch it might be useful to have this from the start.

    – Carl Colijn
    May 30 at 17:04















0














Modeling with proof assistant, such as TLA+.  You can model behaviors, protocols, algorithms, and it can search for deadlocks, race conditions, etc..  There are a number of guides on the web, such as https://learntla.com/introduction



The big cloud service providers use TLA+ to verify their systems — they've found very obscure bugs in lots of areas doing this, above and beyond mere testing. See http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-services-uses-formal-methods/fulltext






share|improve this answer























  • Oofff... I see you picked up my mention of "proving" there :) But it's more that I want a "good enough" estimate that the code is fine, not a formal bulletproof proof. TIA+ certainly is nice, but a bit overkill here; it would mean we'd have to write our code base again in pseudo code for the checker to check.

    – Carl Colijn
    May 30 at 16:51











  • @CarlColijn I was considering mention that the title of your question didn't match the body, because you aren't actually asking for a proof.

    – Trixie Wolf
    May 30 at 16:59











  • @TrixieWolf: good call -- I've updated the title to match. But I'm glad Erik added this answer; it's too late for my situation, but when I start a new project from scratch it might be useful to have this from the start.

    – Carl Colijn
    May 30 at 17:04













You might find thread safety analysis annotations of interest. Provided you can structure your code so the model fits, the analysis provides a cheap way to check that things are OK. (The checks are also in gcc.)
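
Not part of the answer itself, but as a rough sketch of what these annotations look like in practice: the attribute spellings below are the ones documented for Clang's -Wthread-safety analysis, while the TS macro, the Mutex wrapper around std::mutex, and the Account class are invented for illustration. The assumed build line is something like clang++ -std=c++17 -Wthread-safety -fsyntax-only example.cpp.

    // Minimal sketch of Clang thread safety annotations (names invented).
    #include <mutex>

    #if defined(__clang__)
    #define TS(x) __attribute__((x))
    #else
    #define TS(x)  // annotations compile away on other compilers
    #endif

    // A std::mutex wrapper the analysis recognises as a lockable "capability".
    class TS(capability("mutex")) Mutex {
     public:
      void lock() TS(acquire_capability());
      void unlock() TS(release_capability());

     private:
      std::mutex m_;
    };

    inline void Mutex::lock() { m_.lock(); }
    inline void Mutex::unlock() { m_.unlock(); }

    class Account {
     public:
      // Caller must already hold mu_; checked at every call site.
      void depositLocked(int amount) TS(requires_capability(mu_));

      // Caller must NOT hold mu_ (the EXCLUDES macro in the Clang docs);
      // protects against re-locking a non-reentrant mutex.
      void deposit(int amount) TS(locks_excluded(mu_));

     private:
      Mutex mu_;
      int balance_ TS(guarded_by(mu_)) = 0;  // flag unprotected access
    };

    inline void Account::depositLocked(int amount) { balance_ += amount; }

    inline void Account::deposit(int amount) {
      mu_.lock();
      depositLocked(amount);
      mu_.unlock();
    }

    int main() {
      Account a;
      a.deposit(10);
      // a.depositLocked(10);  // would be flagged: requires holding a.mu_
      return 0;
    }

With this in place, deposit() locking mu_ and then calling depositLocked() passes the analysis, while uncommenting the last call in main() should produce a compile-time warning rather than a runtime surprise.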






answered May 30 at 15:36 by Bruce Stephens
  • Thanks for that! There are some annotations that check for deadlock, but they all revolve around me specifying that a specific order of locking should be followed or that the caller may not hold certain locks. It seems this only checks for deadlock within a thread, and not so much between threads? In my case the problem is more that I have multiple processes all interacting in (somewhat) hard to determine ways, and they all need to access diverse data sources, sometimes more than one at a time.
    – Carl Colijn, May 30 at 16:49

  • These locks are across threads (as they have to be), so when you annotate that a function/method can't be called when such a lock is held (or that a lock must be held when it's called), that's a global assertion, not one within a thread. If you have a situation where a method sometimes needs a lock and sometimes doesn't, then the annotations won't help. (If you're starting from scratch, maybe that says your program structure needs changing, but I understand completely that for an existing codebase that might not be an option.)
    – Bruce Stephens, May 30 at 17:23

  • The documentation for EXCLUDES, for example, mentions: "This annotation is used to prevent deadlock. Many mutex implementations are not re-entrant, so deadlock can occur if the function acquires the mutex a second time." Re-entrancy is not really an issue in my case; it's more the interprocess two-lock, wrong-order deadlock I'm worried about. And indeed some functions need to lock two locks in succession, and locking is often only required on some code paths (e.g. syncing of shared data if needed and/or if it is ready).
    – Carl Colijn, May 30 at 17:39

  • You're right, I think I confused myself. Generally, the annotations do require a particular (simple) structure of lock usage. If a program can be arranged to do that, then the annotations provide a cheap way to check (and so maintain) correctness, but it sounds like your existing codebase (like many) may well not be suitable.
    – Bruce Stephens, May 30 at 18:30
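
On the two-lock, wrong-order (ABBA) concern raised in these comments: when a function genuinely has to take two locks in succession, the usual C++ mitigations are either a fixed global acquisition order or acquiring both locks together via std::lock / std::scoped_lock (C++17), which use a deadlock-avoidance algorithm. A minimal sketch, with invented names (Account, transfer):

    #include <mutex>
    #include <thread>

    struct Account {
      std::mutex m;
      int balance = 100;
    };

    // Deadlock-prone: thread 1 runs transferBad(a, b) while thread 2 runs
    // transferBad(b, a); each holds one mutex and waits forever for the other.
    void transferBad(Account& from, Account& to, int amount) {
      std::lock_guard<std::mutex> l1(from.m);
      std::lock_guard<std::mutex> l2(to.m);  // ABBA if the arguments are swapped
      from.balance -= amount;
      to.balance += amount;
    }

    // Mitigation: std::scoped_lock acquires both mutexes through std::lock's
    // deadlock-avoidance algorithm, so the argument order no longer matters.
    void transfer(Account& from, Account& to, int amount) {
      std::scoped_lock lock(from.m, to.m);
      from.balance -= amount;
      to.balance += amount;
    }

    int main() {
      Account a, b;
      std::thread t1(transfer, std::ref(a), std::ref(b), 10);
      std::thread t2(transfer, std::ref(b), std::ref(a), 5);
      t1.join();
      t2.join();
      return 0;
    }

This removes the ABBA interleaving for these two mutexes, but it says nothing about locks acquired elsewhere in the call graph, which is where the model-checking and annotation approaches above come back into play.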














