Stop simultaneous execution of the same script
Level: Advanced | Submitted by: dmcvey@swbell.net | URL: none
My_pid=$$
print "$My_pid" >> temp.file
read input_pid < temp.file
if [[ "$input_pid" == "$My_pid" ]]; then
    :        # allow the script to continue
else
    exit 1   # exit the script; My_pid was not 1st
fi
This guarantees that the first instance of the script submitted will run
and any other occurrences of the same script will exit. Remove temp.file
before the script ends; I suggest using a trap for that. This is better
than trying to deal with ps/grep, and it makes a nice little function.
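A complete, self-contained version of the tip above might look like the
sketch below. The lock-file path and the function name are assumptions,
not part of the original tip:

```shell
#!/bin/sh
# One pid per line is appended; only the first writer keeps running.
lockfile=/tmp/myscript.lock          # hypothetical path

ensure_single_instance() {
    echo $$ >> "$lockfile"           # every instance appends its pid
    read first_pid < "$lockfile"     # read picks up the first line only
    if [ "$first_pid" = "$$" ]; then
        trap 'rm -f "$lockfile"' 0   # we were first: clean up on exit
    else
        echo >&2 "another instance (pid $first_pid) is running"
        exit 1
    fi
}

ensure_single_instance
# ... rest of the script ...
```

Calling the function at the top of every copy of the script gives the
"nice little function" the tip mentions.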
Stop simultaneous execution of the same script (2)
Level: Advanced | Submitted by: ??? | URL: none
If only one instance of a script should run at a given time, a lock
directory could be used to ensure mutual exclusion:
lockname=/tmp/myscript.lock
if mkdir "$lockname"
then
    echo >&2 "acquired lock; continuing"
    # Remove lock directory when script terminates
    trap 'rmdir "$lockname"' 0
    trap "exit 2" 1 2 3 15
else
    echo >&2 "lock already held"
    echo >&2 "another script instance running?"
    exit 1
fi
This can be extended to e.g. write the current process id to a file in the
lock directory. This way another instance can check if the process that
acquired the lock is still running.
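The extension described above can be sketched like this; the directory
path and the `pid` file name inside it are assumptions:

```shell
#!/bin/sh
# Mutual exclusion via a lock directory, with the owner's pid stored
# inside it so a stale lock can be detected and removed.
lockdir=/tmp/myscript.lock           # hypothetical path

if mkdir "$lockdir" 2>/dev/null; then
    echo $$ > "$lockdir/pid"         # record who owns the lock
    trap 'rm -rf "$lockdir"' 0       # remove lock when script terminates
    trap 'exit 2' 1 2 3 15
else
    owner=`cat "$lockdir/pid" 2>/dev/null`
    if [ -n "$owner" ] && kill -0 "$owner" 2>/dev/null; then
        echo >&2 "lock held by running process $owner"
        exit 1
    fi
    echo >&2 "removing stale lock left by pid ${owner:-unknown}"
    rm -rf "$lockdir"
    # a retry or re-exec would go here; omitted to keep the sketch short
fi
```

`kill -0` sends no signal; it only tests whether the process still exists.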
Backup Log files without renaming and interrupting service
Level: Advanced | Submitted by: dmcvey11@home.com | URL: none
Problem: Periodically your log files become too large and need to
be backed up and compressed. Renaming the file with mv will break any
links to the file and could disrupt servers that are using the logs. If
your servers are expected to run 24x7, renaming is not an option.
Solution:
logfile=YourLogName.log
suffix=`date +%Y%m%d%H%M%S`.bak
newFileName=${logfile}.${suffix}
cp -p "$logfile" "$newFileName"
cp /dev/null "$logfile"
compress "$newFileName"
A copy of your log is made and compressed, and the log file is initialized
(emptied), ready for more messages. Processing can continue without
interruption. There is a possibility of a few messages being dropped
between the copy and the truncation (what we don't see won't be missed),
so do this during times of slow usage with a crontab entry.
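The steps above can be wrapped into a small function suitable for a cron
job. The file name is an assumption, and gzip stands in for compress on
systems that lack it:

```shell
#!/bin/sh
# Rotate a log in place: copy it away, truncate the original, compress
# the copy. The live log keeps its inode, so open writers are unaffected.
rotate_log() {
    logfile=$1
    suffix=`date +%Y%m%d%H%M%S`.bak
    backup=${logfile}.${suffix}
    cp -p "$logfile" "$backup"
    cp /dev/null "$logfile"         # truncate without renaming
    gzip "$backup"                  # gzip used here; the tip uses compress
}

echo "one line of logging" > /tmp/demo.log   # demo file (assumption)
rotate_log /tmp/demo.log
```

A crontab entry such as `15 3 * * * /path/to/rotate.sh` would run the
rotation nightly during slow hours.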
saving stdout, stderr and both into 3 separate files
Level: Advanced | Submitted by: Ben Altman | URL: none
# Sometimes it is useful to not only know what has gone to stdout and
# stderr, but also where they occurred with respect to each other.
# Allow stderr to go to err.txt, stdout to out.txt and both to mix.txt:
#
((./program 2>&1 1>&3 | tee ~/err.txt) 3>&1 1>&2 | tee ~/out.txt) > ~/mix.txt 2>&1
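The same construct can be exercised against a stand-in program; the
function and file names below are assumptions, and a space is added
between the opening parentheses so ksh/bash do not mistake them for an
arithmetic expression:

```shell
#!/bin/sh
program() {                          # stand-in for ./program
    echo "to stdout"
    echo "to stderr" >&2
}

# stderr -> err.txt, stdout -> out.txt, both interleaved -> mix.txt
( (program 2>&1 1>&3 | tee /tmp/err.txt) 3>&1 1>&2 | tee /tmp/out.txt) \
    > /tmp/mix.txt 2>&1
```

The inner group sends stdout to fd 3 and stderr through the first tee;
the middle group routes fd 3 into the second tee, and both tees write
their pass-through copies into mix.txt via the outermost redirection.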