#6258 closed defect (fixed)
ENHANCEMENT: IndexRegen.sh should keep itself from running multiple instances
Reported by: danielluke (Daniel J. Luke)
Owned by: jmpalacios (Juan Manuel Palacios)
Priority: Normal
Milestone:
Component: base
Version: 1.0
Keywords:
Cc:
Port:
Description (last modified by ryandesign (Ryan Carsten Schmidt))
In times like now, when the darwinports cvs server is wonky, we don't want (or at least _I_ don't want) a bunch of IndexRegen script executions queued up.
A simple way of doing this is to have the script create and remove a lockfile. It's not perfect (as there's a potential race condition), but works as long as the script is executed in the normal fashion (run out of cron twice a day or so).
Anyway, here's the diff:
--- IndexRegen.sh	2005-12-06 15:47:42.000000000 -0500
+++ /Users/dluke/Applications/IndexRegen.sh	2005-12-11 14:10:49.000000000 -0500
@@ -9,18 +9,19 @@
 ####
 # Configuration
+LOCKFILE=/tmp/.dp_index_regen.lock
 # ROOT directory, where everything is. This must exist.
@@ -47,6 +48,13 @@
 # The date.
 DATE=$(date +'%A %Y-%m-%d at %H:%M:%S')
+if [ ! -e $LOCKFILE ]; then
+    touch $LOCKFILE
+else
+    echo "Index Regen lockfile found, is another index regen running?"
+    exit 1
+fi
+
 # Create the SSH wrapper if it doesn't exist (comment this for -d /Volumes...)
 if [ ! -e $SSH_KEY ]; then
     echo "Key doesn't exist. The script is configured to find the SSH key at:"
@@ -140,3 +148,4 @@
 rm -f $COMMIT_MSG $FAILURE_LOG
 fi
+rm -f $LOCKFILE
Attachments (2)
Change History (8)
comment:1 Changed 19 years ago by jmpp@…
Owner: changed from darwinports-bugs@… to jmpp@…
comment:2 Changed 19 years ago by danielluke (Daniel J. Luke)
(In reply to comment #1)
Would be a better approach, as this assures us we always start with fresh logs (removing the logs inside the conditional could cause a particular run to pick up error messages from a previous one). However, this is just my theory from brainstorming over this; would you mind confirming or rebutting it for me?
This potential issue is orthogonal to my patch. Prior to my patch, the script would only remove the failure log if there wasn't a failure. The script also truncates the failure log at several steps (so that if there is a failure, only the output from the failed step is in the log). So, I don't think that your rm change is necessary.
My patch just wraps the entire script with the lock (try to create it and die if it exists, otherwise run the rest of the script and as the last action remove the lock).
comment:3 Changed 19 years ago by danielluke (Daniel J. Luke)
attachments.isobsolete: 0 → 1
(From update of attachment 5315) sorry, wrong bug
comment:4 Changed 19 years ago by jmpp@…
(In reply to comment #2)
This potential issue is orthogonal to my patch.
That I know, I was only leveraging this opportunity to fix something I thought was a bug ;-)
The script also truncates the failure log at several steps (so that if there is a failure, only the output from the failed step is in the log). So, I don't think that your rm change is necessary.
But you are right, I didn't pay close enough attention to realize the log is overwritten and/or truncated with a single ">" each time something is output to it. Failures append with ">>" only *after* the initial truncation (this is exactly where my logic failed)... so I stand corrected, my "fix" is not needed.
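The truncate-then-append behavior being described is easy to demonstrate: `>` empties the file, so a later `>>` can only accumulate output from the current step. A tiny self-contained illustration (temp file, not the real log path):

```shell
#!/bin/sh
LOG=$(mktemp)

echo "step 1 error" >> "$LOG"   # leftover message from an earlier step
: > "$LOG"                      # next step truncates: old messages are gone
echo "step 2 error" >> "$LOG"   # only the current step's output remains

cat "$LOG"
rm -f "$LOG"
```

This is why a failed run's log contains only the failed step's output, and why the extra `rm` of the logs is unnecessary.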
Other than that, you mind uploading this patch as an attachment? Much easier to apply like that. Tks!
-jmpp
PS: In any case, committing is gonna have to wait, CVS is fubar'd at the moment!
Changed 19 years ago by danielluke (Daniel J. Luke)
Attachment: index_regen.patch added
Patch to use lockfile
comment:5 Changed 19 years ago by jmpp@…
Resolution: → fixed
Status: new → closed
Committed, thanks!
-jmpp
comment:6 Changed 9 years ago by ryandesign (Ryan Carsten Schmidt)
Description: modified (diff)
Hey Daniel! Good idea about the lock file, I like it. However, could you please double check a small modification for me? Taking from your diff, I think changing the existing last block from:
To:
Would be a better approach, as this assures us we always start with fresh logs (removing the logs inside the conditional could cause a particular run to pick up error messages from a previous one). However, this is just my theory from brainstorming over this; would you mind confirming or rebutting it for me?
Tks!
-jmpp