Wednesday, May 27, 2009

What is ClearCase?

ClearCase is a software configuration management (SCM) system that helps automate the tasks required to write, release, and maintain high-quality software. ClearCase is useful to all members of a project, including software engineers, technical writers, project leaders, release engineers, and quality engineers. The ClearCase tool provides version control, workspace management, process control and build management.

The version control mechanism provides versioning of all file types and folders, is highly secure, records and reports actions, provides unlimited branching and merging, automatically merges non-conflicting changes and provides graphical merge and compare tools. Workspace management provides different models for presenting the versions available to be worked on, depending on the development process that has been incorporated.

Build management applies to C and C++ development. When builds are done, a Bill of Materials is automatically created that can be used at any time in the future to exactly reproduce that build. Derived objects are also versioned behind the scenes in ClearCase to help speed up compile times when builds on certain objects are unnecessary. Process control is provided by triggers that fire when certain actions are taken and by the unlimited use of metadata with the versions.

ClearCase also provides for cross-platform development, replication between sites (with Multi-Site) and is highly scalable in the enterprise environment.

Frequently Asked Questions

1.1.1 How to remove (specify) a file with a name beginning in "-"?

Say "./-filename".

1.1.2 setview in scripts doesn't work

Running "setview" without the "-exec" flag starts a new shell.

Generally, using "setview -exec" is a deprecated way of doing things. It will not work on NT, and it will not work in snapshot views. Instead, one should use view extended path names.

When writing scripts, it is a good idea to normalize path names to the view root, which can be obtained by running "cleartool pwv -root". Note that this returns an empty string when run in a setview context. Therefore, you can safely append an absolute pathname beginning with the vob tag to the value returned by "cleartool pwv -root".
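A minimal sketch of this in a Bourne shell script; the vob tag /vobs/myvob and the path below it are made up:

#!/bin/sh
# Prefix paths with the view root so the script works both inside
# and outside a setview shell (inside, pwv -root returns "").
viewroot=`cleartool pwv -root`
cat "$viewroot/vobs/myvob/src/Makefile"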

1.1.3 Get "Event not found." error when running the cleartool find command. (Frederick Sena )

For example:

 % ct find . -version "!version(/main/0)" -print

This is a csh'ism. The "!" is interpreted as an event reference, and this is done prior to resolving the quotes. Use "\!" as in:

 % ct find . -version "\!version(/main/0)" -print

1.1.4 When I try to unmount a vob, I get a "Device busy" error and the vob does not unmount. (Frederick Sena )

Make sure no process has its working directory set to a path beginning with the vob's vob tag. One can use fuser(1) to determine which processes are using that file system.

More specifically, go to /view (view root of the ClearCase Unix Server) and type 'fuser -cu'. If there are entries, try 'fuser -ck' (as root).

It has been observed that this isn't enough in some cases; in extreme cases, only a reboot will clear out the locks. A classic one is when /data (where you find 'clearcase/vobstore') is a link to a remote file system. If that file system is removed (typically during a DRP - Disaster Recovery Plan - exercise) while the ClearCase server is still up, you can only reboot: the mvfs module will deny any attempt to unmount itself with "device busy".

1.1.5 In which order should I upgrade my servers and clients?

Servers first, clients later. Always refer to the release notes for details on any particular upgrade.

1.1.6 When shutting down ClearCase, I get "device busy"...

See 1.1.4.

1.1.7 How can I support more users with my limited set of licenses?

First, start by monitoring your license usage over the day. There are some nice packages that allow you to do this, for example Ed Finch's ClGraph.

The next thing is to reduce the license timeout to the minimum allowed: 30 minutes. Do this by adding a line saying: -timeout 30 into your /var/adm/atria/license.db file. You may also want to give a list of priority users by adding lines saying: -user userid.
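For example, the lines added to /var/adm/atria/license.db might look like this (the user ids are made up; leave the existing License: lines alone):

 -timeout 30
 -user alice
 -user bob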

If you still run out of licenses (and you can't afford to buy more), you can ask people to run clearlicense -release, but keep in mind you can only do this a limited number of times per day (approx. twice the number of licenses).

Touching the /var/adm/atria/license.db file releases all licenses at once, but again this can only be done 12 times per day. I use a crontab script similar to the one below, touching the file at a frequency that roughly matches my usage pattern:

5 9,10,11,13,14,15,16,17,18 * * * /bin/touch /var/adm/atria/license.db

Yet another way to reduce license usage is to encourage developers to use snapshot views and ensure that your change process knows how to deal with snapshot views, as there are a couple of pitfalls. Developers usually need little encouragement, as they will gladly sacrifice dynamic views for faster build performance, especially on NT systems.

1.2 Elements

1.2.1 How to undo a mkelem if rmelem is disabled?

Most sites disable the rmelem command, therefore you cannot undo a mkelem directly. Instead, you should use the rmname (rm for short) command.

Suppose, for example, you accidentally created the subdir element as a file element instead of a directory element. Use this procedure to correct the problem:

% ct mkelem subdir   <== error!
Created element "subdir" (type "compressed_file").
Checked out "subdir" from version "/main/0".
 
% ct ci -identical -nc subdir
Checked in "subdir".
 
% ct rm subdir
cleartool: Warning: Object "subdir" no longer referenced.
cleartool: Warning: Moving object to vob lost+found directory as "subdir.9e881d0d390711d5b3ee000180a933fe".
Removed "subdir".
 
% ct mkdir subdir
Created directory element "subdir".
Checked out "subdir" from version "/main/0".
 
%
 

The checkin is mainly for cosmetic reasons and to avoid confusion if this procedure is used in a snapshot view. In the end, the element isn't removed but just relocated into the trash bin. The ClearCase administrator will empty out the trash once in a while.

This procedure will not work as described if you don't notice the error immediately and check in the containing directory. If you later check out the directory and remove the element, you will notice that it doesn't get relocated into lost+found, since the previous version of the directory still has a reference to that element. If you then attempt to create a directory element with the same name, the evil twin trigger will get in your way. You can fool the trigger, though, by first creating a directory element with a different name and then renaming it using the ct mv command, as sketched below.
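A sketch of that workaround, assuming the element should be called subdir and the containing directory is already checked out:

 % ct mkdir -nc subdir.tmp
 % ct mv -nc subdir.tmp subdir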

1.2.2 How to undo an rmname?

The important thing to remember is that renames, additions and removals of elements from a directory are all harmless and recoverable, so don't panic!

It helps to visualize additions, removals and renames as editing the containing directory. It's very much like adding, removing or changing lines in a file, and the recovery procedures are similar.

The easiest thing to do is to simply cancel the checkout of the directory. This will restore the previous version of the directory, and the removed elements will reappear, as if by magic:

% ct co -nc .
Checked out "." from version "/main/234".
 
% ct rm file  <=== OOOPS!!
Removed "file".
 
% ct unco .
Checkout cancelled for ".".
 
% ls file
file
 
%

If you don't notice the error immediately and have already checked in your directory, you can take advantage of the evil twin trigger and allow it to help you. Simply attempt to re-mkelem the deleted element and follow the instructions:

% ct co -nc .
Checked out "." from version "/main/234".
 
% ct mkelem file
ERROR:  An element named "file" already exists
        in some other version of ".":
 
Instead of creating a new element, you probably want to
create a hard link to the existing element, like so:
 
cleartool ln .@@/main/LATEST/file .
 
% ct ln .@@/main/LATEST/file .
Link created: "./file".
 
%

In general, directories don't contain elements but contain named links to elements. Removing an element doesn't destroy the element, it just removes the link. It is therefore almost trivial to resurrect a removed element.

One consequence is that removing directories and removing files are really the same operation, and the recovery procedure is identical. In particular, if you accidentally removed a whole tree, you only need to recreate the link to the topmost directory of that tree.

1.2.3 I do a findmerge or a checkout and see: "Not a VOB object", but I know that the file is a VOB object.

You are probably dealing with an eclipsed file. Run "cleartool ls" on the file, and if you see "[eclipsed]", simply rename the file out of the way using a plain ordinary "mv" and retry the operation.
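For example (the file name is made up):

 % ct ls foo.c          <== look for "[eclipsed]" in the output
 % mv foo.c foo.c.saved
 % ct ls foo.c          <== the element version is selected again; retry the operation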

The most common way this happens is when developers copy files from other views. Don't do that. Merge the containing directories instead.

1.2.4 How can I locate Hard Links?

A more precise formulation would be: "How do I locate elements that are linked into two or more different directory elements?". There is no really good method to do this except by exhaustive search, for example by using this technique:

Run this command:

 % find . -print | record_inodes

where record_inodes is this little perl script:

#!/bin/perl
 
%links = ();  # hash indexed by inodes containing an array of
              # directory locations
 
%twice = ();  # hash indexed by those inodes referenced by two
              # or more directories
while (<>) {
  chomp;
  next if $_ eq '.';           # skip top level
  next if -l $_;               # skip over symlinks
 
  ($dir = $_) =~ s,/[^/]*$,,;  # determine directory of element
 
  # Get the inode of version 0 if it exists; if not, just
  # stat the file itself. This way, this script can
  # be used outside of ClearCase too.
  $zero_version = $_.'@@/main/0';
  ($dev, $inode) = (-e $zero_version
                    ? stat($zero_version)
                    : stat($_));
 
  if (exists($links{$inode})) {
    # register new location and register inode as "interesting"
    push(@{$links{$inode}}, $_);
    $twice{$inode}++;
  } else {
    # initialize array of locations
    $links{$inode} = [ $_ ];
  }
}
 
if (%twice) {
  print "------------------------------------------------------------\n";
  for $inode (keys(%twice)) {
    print(map("  $_\n", @{$links{$inode}}));
    print "------------------------------------------------------------\n";
  }
} else {
  print "No element linked to multiple directories found.\n";
}

1.2.5 RCS Keyword expansion?

This is probably the single most often asked question. The simple answer is that ClearCase doesn't support RCS keyword expansion, mainly for one reason: it would break merges.

Since two different versions would always have a different RCS keyword expansion at the same location within the file, any merge between those two versions would invariably cause a conflict that couldn't be resolved automatically.

In general, "inband info" (i.e. storing information about an object within an object) is a bad idea, since care must be taken not to confuse any random string with the actual metadata. A vivid demonstration of the danger can be found when attempting to check into RCS files that explain how RCS keywords work.

Instead of storing metadata within the data, one should take advantage of ClearCase's extensive metadata types (attributes, hyperlinks, comments etc...).
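For example, release information could be attached as an attribute rather than embedded in the file; the attribute type name, value and file below are made up:

 % ct mkattype -nc -vtype string RELEASED_IN
 % ct mkattr RELEASED_IN '"3.2"' foo.c@@/main/5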

The obvious follow-up question then is: "How do I access the metadata when I can't connect to ClearCase?" There are really two cases here:

· Version info on the production system;

· Version info in an offline development source tree, for example a disconnected snapshot view.

The correct solution to the first case is to insert the RCS keywords at build or packaging time. There are tools to insert keywords into binaries, for example the contributed package T0039 (ccwhat).

The second case is the best argument for implementing RCS keywords, and if it's truly important to your environment, read on...

There are ways to make RCS keywords work within ClearCase, but they are by no means trivial. The best method so far is to use the contributed package T0027, which contains both a trigger and a type manager. The trigger does the actual work of substituting RCS keywords, and the type manager avoids the merge conflicts by interposing itself between the file and the real type manager, removing all RCS keywords.

Note that even with this solution, your branching strategy should be set up to deal with the following scenario. Assume that the blue developer delivers a change (1) of some file into the red delivery branch. He then never touches that file again. The green developer then makes another change and delivers it (2). The blue developer, having made changes in some other files, wants to sync up and executes a findmerge from the red branch, which will cause green's change to be merged over as a copy merge (3). Now the blue developer wants to deliver his other changes, but since the merge in (3) caused the RCS keywords to change, findmerge will think that blue modified the file even though he didn't, and cause (4) to happen, which is somewhat bewildering.

The reason this happens is that the findmerge algorithm has a case where the actual file content is compared. Unfortunately, findmerge does not use the type manager's compare function but a hard-coded comparison, so it will think the files are different even though the only difference is in the keywords.

There are two possible workarounds:

· delete or rename the development branches after delivery. This is a good thing to do in general, since it will keep the version tree looking clean, especially if the development branch name is to be re-used for the next change;

· have the RCS keywords pre-checkin trigger look for a merge hyperlink, test if the merge was a copy merge and if yes, don't modify the keywords.

1.2.6 How to perform mass checkouts?

On UNIX:

 % cleartool find path -print | awk '{print "co -nc \""$0"\""}' | cleartool

Most often, this is done in the context of upgrading third party software or doing some other form of mass checkin. Recently, Rational came up with clearfsimport, a tool that will apparently do the right thing. Stay tuned...
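Roughly, you point clearfsimport at a plain directory tree and a target vob directory; the paths below are made up, so check the reference pages before relying on the exact flags:

 % clearfsimport -recurse -nsetevent /tmp/vendor-drop /vobs/thirdparty/vendor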

1.2.7 How to perform mass checkins?

On Unix:

 % cleartool lsco -s -cview -me -avobs -fmt 'ci -nc "%n"\n' | cleartool

The command above will fail for checkouts in which no changes have been made. You can either insert the -identical flag or re-run the above once more, this time cancelling the checkouts with unco -rm.

Most often, this is done in the context of upgrading third party software or doing some other form of mass checkin. Recently, Rational came up with clearfsimport, a tool that will apparently do the right thing. Stay tuned...

1.2.8 What is an evil twin?

An evil twin is a pair of links with the same name, each pointing to a different element, in two different versions of the same directory element.

The reason they are evil is that they create the appearance of a directory containing the same file on two different branches, when in fact it contains two different files with the same name. If this situation isn't detected, one can very well end up with two change histories of something that everybody thought was the same file, only to get a rude surprise at merge time. There is essentially no good way to merge the two change histories.

Evil twins get created when two developers concurrently add the same file to source control without coordination, or if one developer copies files from another developer's view and puts those files under source control.

Copying files around should be discouraged, but the best way to avoid evil twins is to create a pre-mkelem trigger that will search through some or all versions of the directory containing the new element, verifying that no link with the same name exists.

The following perl code implements this trigger and prints out a warning together with the suggested link command that will link the existing element into the version of the directory where the newly created element would have ended up.

The trigger will also do a sanity check on the file name itself, catching most unintentional element creation attempts.

The trigger relies on the excellent ClearCase::ClearPrompt and ClearCase::Argv modules, available at a CPAN site near you.

#!/bin/perl
 
use ClearCase::ClearPrompt qw(:all);
use ClearCase::Argv qw(ctsystem ctexec ctqx);
 
# Deal with the possibility of no display on Unix.
BEGIN { $ENV{ATRIA_FORCE_GUI} = $^O =~ /win32/i ? 1 : $ENV{DISPLAY} }
 
#--------------------------------------------------------------------
# Debugging aid. Overloading the semantics of the standard
# CLEARCASE_TRACE_TRIGGERS EV: if it's set to -2 we dump
# the runtime environment into a file in the current dir.
#--------------------------------------------------------------------
if (int($ENV{CLEARCASE_TRACE_TRIGGERS}) < 0 &&
        $ENV{CLEARCASE_TRACE_TRIGGERS} & 0x2) {
   open (EV, ">chk_dup_elems_env.txt");
   print EV "$x=$y\n" while (($x,$y) = each %ENV);
   close EV;
}
 
#--------------------------------------------------------------------
# See if the user wants to suppress this trigger's actions:
#--------------------------------------------------------------------
exit 0 if $ENV{CCASE_NO_CHK_DUP_ELEMS};
 
#--------------------------------------------------------------------
# Figure out the null device.
#--------------------------------------------------------------------
$null = $^O =~ /win32/i ? 'NUL' : '/dev/null';
 
#--------------------------------------------------------------------
# Catch some really bad pathnames - we can't catch everything, but
# let's try to be good.
#--------------------------------------------------------------------
 
$elname = $ENV{CLEARCASE_PN};
$elname =~ s,\\,/,g if $ENV{OS} eq 'Windows_NT';
$elname =~ s,/*$,,;
 
if ($elname =~ /[\"\(\)\{\}\[\]\\\'\^~\!\#]/ || # illegal characters;
    $elname =~ /\s+\// ||                       # leading or trailing
    $elname =~ /\/\s+/ ||                       # spaces at slashes,
    $elname =~ /^\s+/ ||                        # or at the beginning
    $elname =~ /\s+$/ ||                        # or at the end;
    $elname =~ /\.$/)                           # trailing dot.
{
    # substitute offenders with X
    $elname =~ s/[\"\(\)\{\}\[\]\\\'\^~\!\#]/X/g;
    $elname =~ s/\s+\//X\// ||
    $elname =~ s/\/\s+/\/X/ ||
    $elname =~ s/^\s+/X/;
    $elname =~ s/\s+$/X/;
 
    $prompt = <<EOT;
 
ERROR:  The new element name contains characters
        that will cause problems in many scripts.
 
Please do not use quoting characters within filenames.
Also, please do not end a filename with a dot, as NT
cannot deal with this. Finally, do not use leading
or trailing spaces. Your name was (X = illegal):
 
   $elname
 
EOT
    if ($ENV{CCASE_NO_CLEARPROMPT}) {
      print STDERR $prompt;
    } else {
      clearprompt(qw(proceed -type error -default abort -mask abort -prompt),
        $prompt);
    }
    exit 1;
}
 
$dirname =  $elname;
$elname  =~ s,.*/,,;
$dirname =~ s,[^/]*$,.,;
 
@vlist = grep(!m,/\d+$,, ctqx(qw(lsvtree -short), $dirname));
 
# OK, now look in each branch of this directory for the file element to
# be created; display info on error.
 
$found = '';
%seen = ();
unless (-d $dirname.$ENV{CLEARCASE_XN_SFX}.'/main') {
  # snapshot view
SNAP_VERSION:
  foreach (@vlist) {
    chomp;
    if (m,/CHECKEDOUT$,) {
      # Multiple (unreserved) checkouts appear multiple times
      next SNAP_VERSION if $seen{$_};
      $seen{$_}++; 
      ($xpn = $_) =~ s,CHECKEDOUT$,,;
      # unreserved checkouts - there can be several...
      foreach (grep(m,/CHECKEDOUT\.\d+$, , ctqx(qw(ls -short), $xpn))) {
        chomp;
        unless (ctsystem({stdout => 0, stderr => 0}, qw(ls -d), "$_/$elname")) {
          $found = "$_/$elname";
          last SNAP_VERSION;
        }
      }
    } else {
      # only check the LATEST in a non-checkout branch
      unless (ctsystem({stdout => 0, stderr => 0}, qw(ls -d), "$_/LATEST/$elname")) {
        $found = "$_/LATEST/$elname";
        last SNAP_VERSION;
      }
    }
  }
} else {
  # dynamic view
DYN_VERSION:
  foreach (@vlist) {
    chomp;
    if (m,/CHECKEDOUT$,) {
      # Multiple (unreserved) checkouts appear multiple times
      next DYN_VERSION if $seen{$_};
      $seen{$_}++; 
      ($xpn = $_) =~ s,CHECKEDOUT$,,;
      # unreserved checkouts - there can be several...
      foreach (grep(m,/CHECKEDOUT\.\d+$, , ctqx(qw(ls -short), $xpn))) {
        chomp;
        if (-d "$_/$elname") {
          $found = "$_/$elname";
          last DYN_VERSION;
        }
      }
    } else {
      # only check the LATEST in a non-checkout branch
      if (-d "$_/LATEST/$elname") {
   $found = "$_/LATEST/$elname";
   last DYN_VERSION;
      }
    }
  }
}
 
# Nothing; go ahead.
#
exit 0 unless $found;
 
$prompt = <<EOT;

ERROR:  An element named "$elname" already exists
        in some other version of "$dirname":
 
Instead of creating a new element, you probably want to
create a hard link to the existing element, like so:
 
% cleartool ln $found .
 
EOT
 
if ($ENV{CCASE_NO_CLEARPROMPT}) {
  print STDERR $prompt;
} else {
  clearprompt(qw(proceed -type error -default abort -mask abort -prompt),
         $prompt);
}
exit 1;
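Installing the trigger might look something like this; the script location and trigger type name are made up:

 % ct mktrtype -element -all -preop mkelem \
       -c "Catch evil twins and bad element names" \
       -exec /TRIGGERS/chk_dup_elems CHK_DUP_ELEMS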

1.2.9 How can I see who changed what parts of a file and when?

Look up the "cleartool annotate" command.

1.2.10 Uncheckout leaves a zero version on branch; is this bad?

I consider it a nuisance. One common development strategy is for every developer to create their own development branch by using a view with a config spec similar to this one:

element * CHECKEDOUT
element * .../mybranch/LATEST
element * BASELINE -mkbranch mybranch
element * /main/0 -mkbranch mybranch

Note that whenever a developer starts working on some file he hasn't touched, a branch gets created at that point. As work progresses, more and more elements will have a "mybranch" branch.

Once in a while, the BASELINE label gets moved to the newest approved base line, and developers are encouraged to rebase via a merge. The merge will do something for every element that has a "mybranch" branch and for which the version that BASELINE selects changed since the branch was made.

If one leaves zero versions on the "mybranch" branch behind, then this will cause copy merges, which won't affect the logical consistency of your view of the source tree, but will affect performance. Besides, zero versions just look sloppy.

Most sites add a post-rmbranch and post-uncheckout trigger to remove the dangling branch with only a zero version. The following code implements this trigger. Paul D. Smith wrote this trigger, and I added some code to deal with snapshot view weirdness:

#!/bin/perl
 
#--------------------------------------------------------------------
# Nothing special needed Perl-wise - allow Rational Perl to be used
#--------------------------------------------------------------------
require 5.001;
 
#--------------------------------------------------------------------
# Debugging aid. Overloading the semantics of the standard
# CLEARCASE_TRACE_TRIGGERS EV: if it's set to -2 we dump
# the runtime environment into a file in the current dir.
#--------------------------------------------------------------------
if (int($ENV{CLEARCASE_TRACE_TRIGGERS}) < 0 &&
        $ENV{CLEARCASE_TRACE_TRIGGERS} & 0x2) {
   open (EV, ">rm_empty_branch.txt");
   print EV "$x=$y\n" while (($x,$y) = each %ENV);
   close EV;
}
 
#--------------------------------------------------------------------
# See if the user wants to suppress this trigger's actions:
#--------------------------------------------------------------------
exit 0 if $ENV{CCASE_NO_RM_EMPTY_BRANCH};
 
#--------------------------------------------------------------------
# Use safest quoting possible - java creates files with $ in
# them, so I can't just universally use "
#--------------------------------------------------------------------
$q = ($ENV{OS} eq "Windows_NT" ? '"' : "'");
 
# Remove empty branches: if a branch has no elements (except 0 of
# course) after an uncheckout or rmver, or the parent of a
# just-rmbranched branch is now empty, remove it.
#
# If the branch in question is /main, don't do anything (another option
# would be to rmelem the entire element, but that seems like a very bad
# idea to me).
#
# Install like this:
#
#   ct mktrtype -element -global -postop uncheckout,rmver,rmbranch \
#       -c "Remove empty branches after uncheckout, rmver, or rmbranch." \
#       -exec /TRIGGERS/rm_empty_branch RM_EMPTY_BRANCH
#
# CAVEATS:
#   - Ignores attributes!
#   - Won't remove branches where the 0th element is labeled.
#   - Will fail if any branch names contain spaces.
#   - May fail if any branch or label is named ".*" or "*" exactly.
#
# CREATED BY:
#   Paul D. Smith 
#
 
$xname = $ENV{CLEARCASE_XPN};
$xname =~ s,\\,/,g if $ENV{OS} eq 'Windows_NT';
 
# For uncheckout commands, if the version isn't 0 we can punt early
#
exit 0 if ($ENV{CLEARCASE_OP_KIND} eq 'uncheckout' &&
      $xname !~ m,/0$,);
 
# Don't try to remove the /main branch
#
($branch = $xname) =~ s,/[^/]*$,,;
exit 0 if $branch =~ m,\@\@/main$,;
 
# Check if there are other versions, other branches, labels, or checked
# out versions on this branch: if so, don't do anything.
#
if (opendir(D, $branch)) {
  # this opendir succeeds only in a dynamic view
  @other_stuff = readdir(D);
  closedir(D);
 
  # in an empty branch, there are four thingies:
  # ".", "..", "0" and "LATEST". If there are more, then
  # it isn't an empty branch.
  exit 0 if (scalar(@other_stuff) != 4);
} else {
  # version extended name space not available implies
  # we're in a snapshot view, and we will have to work
  # a little harder here...
  ($pname, $brpath) = split($ENV{CLEARCASE_XN_SFX}, $branch);
  # an rmbranch will not reload the element...
  system("cleartool update -log /dev/null $q$pname$q")
    if ($ENV{CLEARCASE_OP_KIND} eq 'rmbranch');
  @vtree = `cleartool lsvtree -branch $brpath $q$pname$q`;
  chomp($latest = pop(@vtree));
  $latest =~ tr,\\,/, if $ENV{OS} eq 'Windows_NT';
  exit 0 unless $latest =~ m,$brpath/0$,;
}
 
# Remove it!
system("cleartool rmbranch -force -nc $q$branch$q");
 
exit 0;

1.2.11 I can see a file on one branch, but I can't see it on another branch for the same directory. How can I get the file to appear on the other branch? (Frederick Sena )

You use the cleartool ln command. See 1.2.14 for figuring out how to specify the invisible file as an argument to that command.

1.2.12 When I do cleartool ls, it says "[eclipsed]". What does "eclipsed" mean? (Frederick Sena )

1.2.13 How to check in a file that hasn't changed (or is empty) (Bygland, Brian )

Consider simply cancelling the checkout, but be aware that if the checkout was the result of a merge, cancelling the checkout will result in the merge arrow disappearing, forcing you to redo the merge next time. In this case, using the -identical flag of the checkin command may be preferable.

Answered by cg@miaow.com

1.2.14 How to see a file that your view / config spec can't see (Bygland, Brian )

The reason why you're not seeing the file is that the version of the directory selected by your view doesn't have a link to that file. Therefore the solution is to figure out which version of that directory element has the link.

The easiest way to access such a file is to use a different view, namely one that selects the appropriate version of the directory. Even if you are set to some view that doesn't, you can use the so-called view extended path to refer to the other view. On UNIX, this is done by prepending /view/viewtag to the full pathname.

In some situations, it may not be convenient or safe to use view extended paths. One problem is that while you are using a view extended path, someone else may be changing the config spec of that view. For logging purposes in particular, you are better off storing the object id of the element and letting ClearCase tell you a pathname that is valid in your view. In other words, first use a view that selects your invisible file and run this:

% cleartool setview otherview
 
% cleartool describe -fmt '%On\n' path/to/invisible/file@@
03fcf938.39c011d5.b891.00:01:80:ab:ed:ac

Note that the final @@ at the end of the path tells ClearCase that you are interested in the element's object id. Omitting the final @@ would give you the version's object id, which may or may not be what you need. Now, to retrieve a pathname valid in your view, do this:

% cleartool setview myview
 
% cleartool describe -fmt '%p\n' oid:03fcf938.39c011d5.b891.00:01:80:ab:ed:ac
/vob/somevobtag/path@@/main/2/to/main/1/invisible/main/3/file@@

Note how we enter version extended name space (or history mode on NT) very early in the path. This is natural, since our premise is that the file isn't visible in our current view. These paths can become quite long, easily exploding NT's stupid limit on path name length or command line length if you ever try to use that pathname in some command.

Note that this technique is very useful for tracking down relocations. If in the example above the file had been relocated to some unknown but visible location, ClearCase would have shown that location instead. ClearCase is amazingly smart in figuring out the shortest possible pathname to a specific element.

Answered by cg@miaow.com

1.2.15 What's the difference between %u and %[owner]p ?

%u designates the user who created the object or event, and %[owner]p is the actual owner. This can be confusing when you change ownerships of elements and expect %u to return the owner.
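For example (element and version are made up):

 % ct describe -fmt 'created by %u, owned by %[owner]p\n' foo.c@@/main/3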

1.3 Queries

1.3.1 How to see what changed between two labels?

Assuming that you always label all visible elements, and assuming that LABEL1 is applied prior to LABEL2, the following will do:

1. In a view selecting LABEL1, run this to obtain those elements which have been unlinked in LABEL2:

 % ct find vobtag -element 'lbtype_sub(LABEL1) && !lbtype_sub(LABEL2)' \
                  -print

2. In a view selecting LABEL2, run this to obtain those elements which have been added since LABEL1:

 % ct find vobtag -element 'lbtype_sub(LABEL2) && !lbtype_sub(LABEL1)' \
                  -print

3. In the same view, locate those versions that have changed since LABEL1. Note that this will include the elements found in step 2:

 % ct find vobtag -version 'lbtype(LABEL2) && !lbtype(LABEL1)' -print

1.3.1a How to see what changed between two timestamps?

Use the created_since query construct, like so:

% ct find vobtag -version 'created_since(early) && !created_since(later)'

Note that this will exclude versions created exactly at timestamp later, so it's not an exact equivalent of the label-based queries described in 1.3.1.
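For example, to list everything created during one particular week (the dates are made up; ClearCase accepts date-times such as 10-May-2009 or 10-May-2009.14:30 in queries):

 % ct find vobtag -version 'created_since(10-May-2009) && !created_since(17-May-2009)' -print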

1.3.2 How to determine the vob of an element?

% ct lsvob pathname

1.3.3 findmerge takes forever on a single element

In order to set up a merge, findmerge needs to find the common ancestor of two versions. In complicated version trees, there can be many common ancestors, and finding the best one requires a non-trivial graph traversal algorithm.

The main problem is that while findmerge is looking for a common ancestor, it is holding a lock on the vob DB, preventing others from writing to the DB. In order to avoid holding the lock for an inordinately long time, findmerge will interrupt its search, release and reacquire the lock, and restart the search at ever-increasing intervals. There are two environment variables that can be used to tweak this behaviour, which may improve performance for specific elements:

CLEARCASE_FM_TRANS_THRESHOLD (default: 128)

This is the base value for how many versions are examined prior to a restart. This value is doubled, tripled etc as needed after every restart. Increasing this value will avoid a restart in those cases where it was "almost there", but may increase the risk that other developers will experience vob DB timeouts.

CLEARCASE_FM_MAX_LEVEL (default: 65536)

This sets the maximum number of calls that will be made in the quest for the best common ancestor. This value is set very high by default. Setting this to a low value will definitely accelerate findmerge, but may cause it to choose a very old common ancestor, forcing the merge to step through many old changes.
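As a one-off experiment in a csh-style shell, you might try something like this; the values and the element are made up, not recommendations:

 % setenv CLEARCASE_FM_TRANS_THRESHOLD 1024
 % setenv CLEARCASE_FM_MAX_LEVEL 64
 % ct findmerge foo.c -fver .../del_branch/LATEST -print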

Check out tech note 747 for more details.

1.3.4 findmerge takes forever over a vob

There are two good techniques to speed up findmerge, often by orders of magnitude:

· Use -fver instead of -ftag. Unless you have a very unusual situation, -fver should be good enough.

· Add -element queries to restrict the number of elements tested. If, for example, you use the common development branch/delivery branch technique, and want to merge the latest from the delivery branch into your development branch, you really only need to consider those elements that have a development branch. So, using the following command instead of a straight findmerge can result in huge performance improvements:

 % ct findmerge someplace \
                -element 'brtype(dev_branch)' \
                -fver .../del_branch/LATEST -merge

Using -avobs instead of recursive descent can also improve performance, but is a riskier technique if directory merges are involved. Since you don't control the order of merges when using -avobs, you could conceivably merge a directory element which isn't visible yet because the parent directory element hasn't been merged yet. Using -avobs -visible will avoid error messages, but may cause you to miss some merges entirely.

1.4 Views & Config Specs

1.4.1 Why do people put /main/0 at the end of a config spec?

This is a good interview question... A typical development config spec looks like this:

element * CHECKEDOUT
element * .../mybranch/LATEST
element * BASELINE -mkbranch mybranch
element * /main/0 -mkbranch mybranch

or like this:

element * CHECKEDOUT
element * .../mybranch/LATEST
element * .../deliverybranch/LATEST -time sometime -mkbranch mybranch
element * /main/0 -mkbranch mybranch

They all contain the /main/0 rule at the end because otherwise you couldn't create new elements. When you create a new element, it will only have a /main/0 version. It wouldn't have a "mybranch" branch, nor a BASELINE label (in the first config spec), and wouldn't have a "deliverybranch" either. Therefore, right after the element was born, it would disappear from your view, since no valid version is selected and the -mkbranch operation couldn't take place. Adding the /main/0 rule ensures that you can always see a newly created element.

The typical follow-up interview question then is: "if everybody uses /main/0, how come I can't see somebody else's new element?". This is a consequence of directory versioning. In fact, two conditions must be satisfied if you are to see a specific element in your view:

· Your config spec must select a valid version

· Your config spec must select a version of the containing directory that has a link to that element.

It is easy to see that the second condition is not satisfied for other users who use their own development branches.

Well, why not use /main/LATEST instead of /main/0?

Using /main/LATEST may hide labeling errors. Suppose, for example, that there are elements that are missing the BASELINE label for some reason, and you are using a config spec like the first one above. If you use /main/0, you will get empty files and empty directories for such elements, which have a good chance of at least causing some visible warning or error at build time. If you use /main/LATEST, you may end up using the wrong version and not notice it.

There is yet another case where /main/0 is required, and is only indirectly related to creating new elements: intentionally empty directories or files.

When one creates an empty directory or file and later merges it onto a new branch, the findmerge algorithm will notice that the base contributor (/main/0) and the target contributor (also /main/0) are identical, and since the source contributor is empty it will skip the copy merge, leaving you with nothing. Therefore, you need to select /main/0 even in a config spec where no new elements are created, such as:

element * .../deliverybranch/LATEST -nocheckout
element * /main/0 -nocheckout

Oh, and how come Rational's documentation uses /main/LATEST all over the place? My guess is that it's a holdover from old documents, where more complex config specs were derived from the default config spec. It is a non-obvious step to replace /main/LATEST with /main/0, and even to this day, many people think that there is some kind of obligation to merge back into main...

1.4.2 I see checkouts to a view that no longer exists; how do I get rid of them?

First figure out the UUID of the view by running:

% cleartool describe -long vob:vobtag
versioned object base "vobtag"
  created 31-Dec-00.16:23:00 by ClearCase VOB admin account (vobadm.staff)
  VOB family feature level: 2
  VOB storage host:pathname "someplace"
  VOB storage global pathname "someplace"
  database schema version: 53
  VOB ownership:
    owner someone
    group some group
  Additional groups:
    ...
  VOB holds objects from the following views:
    ? [uuid c00c3821.f94411d4.ba94.00:01:80:a9:33:fe]
    ...

You then can remove all the references to the non-existing view by running:

% cleartool rmview -force -avobs -uuid c00c3821.f94411d4.ba94.00:01:80:a9:33:fe
Removing references ...

1.4.3 How do I rename a view?

First end the view server processes by entering:

% cleartool endview -server oldtag

If the view is used on multiple clients, you may have to go from client to client to terminate the view, otherwise you may later get "stale NFS file handle" error messages and possible hangups.

You then remove the view tag using the rmtag command:

% cleartool rmtag -view oldtag

If you also want to rename or relocate the storage directory, unregister the current location, rename and register the new location:

% cleartool unregister -view old-location
 
% mv old-location new-location
 
% cleartool register -view new-location

Now you can create a new tag using mktag:

% cleartool mktag -view -tag newtag new-location

You may need to specify more arguments (triple path) depending on the type of the view. You essentially need to replicate most of the arguments you used when creating the view.
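For a dynamic view on Unix, a full mktag invocation might look something like this; the host and storage paths are made up:

 % cleartool mktag -view -tag newtag -host viewsrv \
       -hpath /export/views/newtag.vws \
       -gpath /net/viewsrv/export/views/newtag.vws \
       /net/viewsrv/export/views/newtag.vws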

1.4.4 How can I keep people from changing their config specs? (Evelyn Leeper )

You can experiment with setting various access permissions in the view storage directory, but this isn't really recommended. Instead, you should figure out why developers are changing their config specs and why this would be a problem.

My preferred method for dealing with this is to encourage the use of view maintenance wrapper scripts that include config spec generators. These scripts should fit in flawlessly with your process and simplify it. Your measure of success will be whether the developers prefer the scripts to hacking their own config specs.
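A minimal sketch of such a wrapper (Bourne shell); the BASELINE label and the one-branch-per-developer convention are assumptions taken from the config specs shown in 1.4.1, so adapt it to your own process:

#!/bin/sh
# mkdevcs: regenerate and apply a standard development config spec
branch=$1
tmp=/tmp/cs.$$
cat > $tmp <<EOF
element * CHECKEDOUT
element * .../$branch/LATEST
element * BASELINE -mkbranch $branch
element * /main/0 -mkbranch $branch
EOF
cleartool setcs $tmp
rm -f $tmp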

1.4.5 How can I keep people from creating views? (Evelyn Leeper )

You can't and shouldn't, at least for those people who need access to the code. Obviously, you need to keep unauthorized people away, but this can be accomplished by properly setting access permissions to vobs. In other words, you can't prevent people from creating views but you can prevent them from seeing anything with them.

Views really are lightweight and ephemeral. Creating views shouldn't be a big deal. Your change process should encourage people to check in their work often and essentially treat views as temporary storage areas.

1.4.6 The message says: see view_log on server server-name. Where are the logfiles stored? (Frederick Sena )

They are located on the machine that serves the view. You can determine this machine by running:

% cleartool lsview -long viewtag
Tag: viewtag
  Global path: someplace
  Server host: view-server
  Region: this-region
  Active: YES
  View tag uuid:48b5b6f8.752b11d4.a259.00:01:80:a9:33:fe
  View export ID (registry): 18
  ...

1.4.7 How do I clean up view private files from my view? (Frederick Sena )

1.4.8 How do I recover stranded view private files?

1.4.9 How do I remove a view owned by someone else, e.g. someone who left the company? (Frederick Sena )

1.4.10 Why do I get "Checked-out version is not selected by view." when I try to check out a file? (Frederick Sena )

This can happen if (a) you checked out a specific version from the version tree that your view does not select, or (b) a -mkbranch rule has been added to your config spec which creates a new branch that is not selected by your view.

Under Windows, open the version tree and look for the icon that resembles an "eye"; it indicates which version is currently selected in your view. Now look for the checked-out version (a circle with no version number in it); in most cases the branch is different! Examine your config spec to find out what has happened.

On Unix, enter cleartool ls {element name} to find out which rule ClearCase uses to select the version, and cleartool lsco {element name} to see which version has been checked out.

1.4.11 Why do I get "permission denied" when I try to checkout a file in someone else's view? (Frederick Sena )

1.4.12 How to list all of the files in my view that have never been checked-in to the VOB (Bygland, Brian )

1.4.13 Why deal with the "linked storage area"?

The "linked storage area" is the physical storage location of view private files. This storage location can be on external devices, for example filers, whereas the view database storage needs to be on a local disk.

Recent versions of ClearCase have relaxed this condition somewhat, and if you're using filer hardware recommended by Rational, you can put the whole view storage directory on the filer. In other words, linked storage areas are somewhat obsolescent and designed to work around a problem that is disappearing.

1.5 Vobs

1.5.1 What is the recommended vob size?

1.5.2 How (and why) do I split vobs?

1.5.3 Why do I have to lock vobs prior to backing them up?

You must lock vobs to keep the database consistent with the storage containers where all the data (element versions) is stored. Write operations which occur during the backup break that consistency: the keys in the database are not fully correct, and you may have trouble accessing certain elements. You can use cleartool checkvob -pool -source to check for errors.

Remember, inconsistencies in the database are hard to notice without the cleartool checkvob command. You might think that everything is okay but months later you will run into problems.
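A typical backup wrapper therefore brackets the copy with a lock; the vob tag, storage path and backup command below are made up, and the whole vob storage directory must be backed up:

 % cleartool lock vob:/vobs/myvob
 % tar cf /backup/myvob.tar /vobstore/myvob.vbs
 % cleartool unlock vob:/vobs/myvob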

1.5.4 Why deal with storage pools?

1.5.5 I see a hyperlink described as "? -> ?". How do I get rid of it?

Use the checkvob command to repair broken hyperlinks:

cleartool checkvob -hlinks [-force]

1.5.6 How do I rename a vob?

Every Vob has a tag in each region. The easiest way to rename your vob is to remove the tag and create a new one:

cleartool lsvob -l old-vob-tag
cleartool rmtag -vob old-vob-tag
cleartool mktag -vob -tag new-vob-tag vob-storage-pathname

Make sure that you use the correct host and access path for your new tag and add -public if necessary. Also, you must have access to the vob storage path or you won't be able to create the tag.

1.5.7 What are adminvobs? (Sakib Bhola )

As you know, metadata types like branches, labels, attributes etc. are stored per vob. Before you can create instances of them (e.g. mklabel), you have to create the type object itself (e.g. mklbtype). If you have a project where different vobs belong together, you usually want to make the types available in all vobs, but of course you don't want to create them n times for n vobs. Admin vobs are made for this situation: you create the types once in your admin vob, and they are available in all other vobs which are linked to the admin vob.

In UCM, the project vob automatically becomes the admin vob and stores all necessary information.
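Linking a client vob to its admin vob is done with an AdminVOB hyperlink, for example (the vob tags are made up):

 % cleartool mkhlink -nc AdminVOB vob:/vobs/dev vob:/vobs/admin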

1.5.8 What is in lost+found?

ClearCase puts into lost+found all elements which are no longer referenced. Imagine you remove a directory with rmname or rmelem: ClearCase won't recursively remove the elements in that directory, but they can no longer be reached by name either. Instead, they are moved to lost+found.

1.5.9 The VOB is Empty! Where did it go? How to Mount a VOB

1.6 Clearmake, DO's and Winkins

1.6.1 How to list all of the Derived Objects stored within a ClearCase VOB in the current view? Bygland, Brian

1.6.2 How to remove derived objects? Bygland, Brian

1.7 Process, Policies etc

1.7.1 Commonly used triggers?

1.7.2 How can I visualize the branching policy used at a site?

The following script will take as input the output of any set of "lsvtree -short" commands, for example the output produced by:

% cleartool find . -print | xargs cleartool lsvtree -s

It will then produce an indented list of branch/sub-branch relationships and also detect cycles (for example if you have both /main/x/y and /main/y/x).

#!/bin/perl
 
while (<>) {
  next unless m,\@\@/(main/[^\@]*)$,;
  $branches = $1;
  @branches = split('/', $branches);
  pop(@branches);  # remove version
  $parent = shift(@branches); # parent is first entry (main)
  for $branch (@branches) {
    $ancestor_of{$branch}{$parent} = 1;
    $offspring_of{$parent}{$branch} = 1;
    $parent = $branch;
  }
}
 
make_node('main', 0);
 
sub make_node {
  my ($parent, $indent) = @_;
 
  print(' 'x$indent.$parent."\n");
  my $children = 0;
  my @offspring = sort(keys(%{$offspring_of{$parent}}));
  for my $child (@offspring) {
    if (scalar(keys(%{$ancestor_of{$child}})) == 1) {
      # a "real" child is one that has only one ancestor: $parent
      $children = 1;
      make_node($child, $indent+2);
    }
  }
  if (@offspring && !$children) {
    print(' 'x$indent.'  Cycle detected: '.join(' ',@offspring)."\n");
  }
}