In a CMD batch trigger script I use a cleartool command to write activity information to a file:

cleartool lsactivity -long  %CLEARCASE_ACTIVITY%>>C:\folder\activityinfo.txt

This works almost every time, but occasionally, for a reason unknown to me, the cleartool command does not write the information correctly to the file, resulting in a 0KB output file that I cannot delete. What's more, it blocks the trigger from running properly in successive attempts.

I wrote some code that checks whether the output file is 0KB in size, but that doesn't work because the cleartool command seems to keep the file open even though it isn't writing to it. It's so strange!
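
For illustration, that kind of size check (a rough sketch, not my exact code; it assumes the same fixed path as above) looks something like this:

if exist C:\folder\activityinfo.txt for %%F in (C:\folder\activityinfo.txt) do (
    rem %%~zF expands to the size of the file in bytes
    if %%~zF EQU 0 (
        echo activityinfo.txt is 0KB - cleartool wrote nothing
        rem this del fails while cleartool still holds the file open
        del C:\folder\activityinfo.txt
    )
)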

After a number of hours the trigger works again, presumably because whatever is holding the file finally releases its lock.

Is there any way to avoid this phenomenon?

Regards,

Andrew

1 Answer

What I have seen done is to write to a different file for each invocation of cleartool lsactivity, and to aggregate those files into one once the full process is done.

One technique, for instance, would be to use the date in the filename, as in "Batch command date and time in file name".

rem Zero-pad the hour: %time% has a leading space for hours below 10
set hr=%time:~0,2%
if "%hr:~0,1%" equ " " set hr=0%hr:~1,1%
rem Build a YYYYMMDD_HHMMSS timestamp in the file name (assumes an MM/DD/YYYY %date% format)
cleartool lsactivity -long %CLEARCASE_ACTIVITY% > C:\folder\activityinfo_%date:~-4,4%%date:~-10,2%%date:~-7,2%_%hr%%time:~3,2%%time:~6,2%.txt

Note that I only use '>', not '>>', since the file 'activityinfo_YYYYMMDD_HHMMSS.txt' is always different at each cleartool invocation.
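
If the per-invocation files later need to be gathered back into a single file, once the full process is done, a one-line concatenation is enough. A sketch, where the combined subfolder is an assumption (used so the result does not match its own wildcard):

if not exist C:\folder\combined md C:\folder\combined
rem copy with a wildcard source and a single destination concatenates all matching files
copy /b C:\folder\activityinfo_*.txt C:\folder\combined\activityinfo_all.txt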

The OP Andrew comments:

I have set the cleartool lsactivity command to write to (or overwrite) the files with '>' instead of '>>' (append).

Also, since sometimes there are hiccups in the system, I set a sleep of 5 seconds after the command just in case there is a delay in writing to and creating the file.

From the comments below, though, adding a sleep at the beginning of the script is also recommended:

I have added an initial sleep of 20 seconds at the beginning of the script and I haven't had any problems yet.
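
On Windows Server 2003 (the OP's platform, per the comments) cmd.exe does not ship a sleep command and may not have timeout either, so a common way to add those pauses in a plain batch trigger is the ping loopback trick. The sketch below is illustrative only; %OUTFILE% stands for the timestamped path built as in the snippet above:

rem ~20 second pause at the very start of the trigger script
ping -n 21 127.0.0.1 > nul

rem ... build the timestamped %OUTFILE% and write the activity details ...
cleartool lsactivity -long %CLEARCASE_ACTIVITY% > %OUTFILE%

rem ~5 second pause afterwards, in case the file is slow to appear
ping -n 6 127.0.0.1 > nul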

  • Thanks again for your valuable input. I have already set the script so that it adds a unique ID at the end of the filename. In my case I am using the activity name since I need to access the file at a later stage. This `cleartool` command is only called once per script execution so I don't need to aggregate more files into one. I am curious about the difference between `>` and `>>`. What would using `>` do? – Andrew Feb 21 '13 at 10:13
  • '`>`' forces the creation of the file or overriding, if it existed. '`>>`' is for *appending* content to an existing file. In your case, I recommend '`>`'. – VonC Feb 21 '13 at 10:17
  • Thanks for the suggestion. I have set the `cleartool lsactivity` command to write to (or overwrite) the files with '`>`' instead of '`>>`'. Also, since sometimes there are hiccups in the system I set a sleep of 5 seconds after the command just in case there is a delay in writing to and creating the file. – Andrew Feb 21 '13 at 10:55
  • @Andrew sounds good. I have included your last comment in the answer for more visibility. – VonC Feb 21 '13 at 11:26
  • Unfortunately this did not solve the problem. There are two distinct `cleartool lsactivity` calls in the script and they each write to a separate file with a unique name. The 0KB lock file has appeared again. Maybe it is because two people caused the trigger to fire and the cleartool commands to overlap? I have no idea, but it is definitely a blocking point. Do you have any more ideas? I'll try setting the sleep for longer just in case... – Andrew Feb 21 '13 at 19:07
  • @Andrew more ideas? Try redirecting stderr as well as stdout in your lsact call, as in http://stackoverflow.com/questions/482678/how-to-capture-stderr-on-windows-dos. That way, if those 0KB files are there because of some error message, that error message will be in the file (which might not be 0KB anymore, but you can grep -i error, and if you don't have grep, just get GoW: https://github.com/bmatzelle/gow/wiki) – VonC Feb 21 '13 at 19:13
  • thanks for the input. Are these UNIX commands? I am running my script on Win Server 2003... – Andrew Feb 21 '13 at 19:33
  • @Andrew they are Unix commands compiled for Windows. You will be fine. – VonC Feb 21 '13 at 20:07
  • Thanks. My line now looks like `cleartool lsactivity -long %CLEARCASE_ACTIVITY%>C:\folder\activityinfo.txt 2>&1` so if the script gives me a locked 0KB file and doesn't write the activity information it will hopefully tell me something. – Andrew Feb 21 '13 at 20:12
  • Just to keep you updated, the 2>&1 doesn't give me any lines within the blank 0KB file. It just sits there and blocks everything. Are there any suggested ways for exporting cleartool command results to file other than using `>`? – Andrew Feb 22 '13 at 19:39
  • @Andrew no way that I am aware of. Could you determine which activity produces those 0KB files? Remove the `2>&1`, since it blocks everything. And add an `echo %CLEARCASE_ACTIVITY% > myfile.act` before the `cleartool lsactivity -long %CLEARCASE_ACTIVITY% > myfile` (with `myfile=C:\folder\activityinfo_%date:~-4,4%%date:~-10,2%%date:~-7,2%_%hr%%time:~3,2%%time:~6,2%.txt`, of course). Note the `.act`, in order to generate two files (one with the activity name, the other with the -- potentially empty -- output). – VonC Feb 22 '13 at 20:05
  • I've removed `2>&1` and added the extra write to file. In order to manage concurrent triggers, I have a subfolder `C:\folder\%CLEARCASE_ACTIVITY%` for each activity under which there will be two files: `myfile_%CLEARCASE_ACTIVITY%_%TIMESTAMP%.tmp` and `myfile_%CLEARCASE_ACTIVITY%_%TIMESTAMP%.txt`. The script will go on to analyse the second (not 0KB) file. Hopefully this will catch the hiccup. Thank you for your interest. – Andrew Feb 23 '13 at 09:23
  • OK, so the strange thing is that it still gives me a 0KB file, but not on the first tmp file that is created! It seems that the problem is caused on the file that the script has to access afterwards! – Andrew Feb 23 '13 at 09:57
  • @Andrew OK, but can you deduce what activity is involved at that moment (that was the whole point of my last suggestion)? Maybe it is corrupted? – VonC Feb 23 '13 at 10:36
  • It happens at random with various activities. An activity that works now will freeze later and will then start working again. I don't think that the problem is linked to a specific activity, but rather when the output file has to be re-opened by the script. Also, it doesn't happen all the time, just occasionally. – Andrew Feb 23 '13 at 11:10
  • @Andrew ok so another approach is in order. How about writing just the name of the activities? (and then later do the `ct lsact -l`)? Just to see if that smaller operation has the same issues? – VonC Feb 23 '13 at 12:01
  • what do you mean by writing just the name of the activities? – Andrew Feb 23 '13 at 12:25
  • @Andrew just an `echo %CLEARCASE_ACTIVITY% > myfile.act` and that is it. Just to see if a very short command has the same issue as the `ct lsact -l`. – VonC Feb 23 '13 at 12:27
  • I have a log that is continuously being written to during script execution and it stores various information, including activity, stream, headline, and view information retrieved from the activity. When I was calling `ct lsact -l` twice (1 for tmp and 1 to analyse), only the one used for analysis seemed to have the problem even though the tmp file was written first. I tried to copy and rename the tmp file instead of calling `ct lsact -l` again, but then the copied file started having problems. It seems to cause issues when it is accessed... I am still running tests. – Andrew Feb 23 '13 at 12:37
  • It seems that this problem will randomly affect any of the files. It just blocked with the first tmp file. Could it be that the previous process was not completed properly? For example, if the script runs once everything seems to be fine. When it is run multiple times, it will block on the third or fourth time (or even later). Is there a way of knowing or forcing the script or trigger to close properly and complete all activities? – Andrew Feb 23 '13 at 13:03
  • @Andrew that looks like a file system synchronization problem indeed. Is it for a postop trigger on a CCRC web (ie snapshot) view, as in your previous question (http://stackoverflow.com/q/14831245/6309)? – VonC Feb 23 '13 at 13:19
  • This is the same script, just further down the line! It's a postop trigger on the checkin operation. To follow up on my last note, I have added an initial sleep of 20 seconds at the beginning of the script and I haven't had any problems yet. Normally I would see problems after a half hour or so, but I haven't had any issues yet. I'll continue to monitor. I have taken out the tmp files and the call happens just once. – Andrew Feb 23 '13 at 15:43
  • I have been monitoring the initial sleep of 20 seconds for a few days now and I haven't run into the same problem yet. I think this may be the solution! – Andrew Feb 26 '13 at 09:18
  • @Andrew Sounds great. I have included in my answer your recommendation about a sleep at the *beginning* of the script. That way, it is more visible than buried in the comments. – VonC Feb 26 '13 at 09:23
  • The problem showed up again. I don't think it has anything to do with ClearCase at this point since it only causes the problem once the file is being accessed (this was discovered by writing the temp files and performing file copies). I suppose this thread can be abandoned and I might open another request to include batch experts. Thank you for your interest. – Andrew Feb 26 '13 at 11:13
  • I have finally identified the issue and opened [this thread](http://stackoverflow.com/questions/15110654/what-could-cause-cleartool-exe-to-crash-when-being-executed-using-cmd-batch-scri) about it. It is cleartool that crashes and hangs the script execution. – Andrew Feb 27 '13 at 11:16
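
For reference, the diagnostic split suggested in the comments above (writing the activity name to a small `.act` file just before the `lsactivity` call, so that even a 0KB output file can still be matched to an activity) would look roughly like this. The `myfile` variable follows VonC's comment, `%hr%` is the zero-padded hour from the answer's snippet, and sending stderr to a separate `.err` file is a variation on the earlier `2>&1` idea rather than part of the actual script:

set myfile=C:\folder\activityinfo_%date:~-4,4%%date:~-10,2%%date:~-7,2%_%hr%%time:~3,2%%time:~6,2%

rem record the activity name first, in its own small file
echo %CLEARCASE_ACTIVITY% > %myfile%.act

rem then write the full details; any error message lands in the .err file
rem instead of being mixed into the data file
cleartool lsactivity -long %CLEARCASE_ACTIVITY% > %myfile%.txt 2> %myfile%.err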