[intrepid-notify] V1R4M0 installed on ALCF/Intrepid

Tisha Stacey tstacey at alcf.anl.gov
Mon Nov 2 18:16:26 CST 2009


Dear Users,

Today, IBM's latest driver, V1R4M0, was made active on Intrepid. While
most, if not all, of your applications compiled for V1R3M0 or V1R2M0
will likely continue to work without issue, you may wish to recompile
or relink your code to take advantage of bug fixes, performance
improvements, and new features. Please note that V1R4M0 changes the
default Python version from 2.5 to 2.6. We are currently building and
testing new versions of mpi4py and numpy for use with Python 2.6.
There are no changes to job submission, charging, or scheduling.
Please address all questions, comments, or concerns to
support at alcf.anl.gov.

Following is IBM's "Memo to Users":

The Blue Gene/P V1R4M0 Memo To Users:
-----------------------------------------------------------

Multiple application threads per core
A new environment variable has been introduced that controls the
number of application threads that can exist on each core:
BG_APPTHREADDEPTH=x where x can be 1, 2, or 3. By default, a value of
1 is used. If a number larger than 3 is specified, the maximum value
of 3 is used. Setting this environment variable to a value greater
than 1 must be done with an understanding that the Compute Node Kernel
does not provide a preemptive thread scheduler.
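Environment variables such as BG_APPTHREADDEPTH are typically passed to the compute nodes at job submission time. A minimal sketch, assuming Cobalt's qsub with its --env option as used on Intrepid (partition size, wall time, and application name below are illustrative):

```shell
# Allow up to 3 application threads per core in virtual-node mode.
# The node count (-n), wall time (-t), and executable are examples;
# adjust them for your own job.
qsub -n 512 -t 30 --mode vn \
     --env BG_APPTHREADDEPTH=3 \
     ./my_threaded_app
```

Because the Compute Node Kernel has no preemptive thread scheduler, threads beyond one per core should yield or block cooperatively, or they may starve one another.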

Binary Core File Generation
A new environment variable, BG_COREDUMP_BINARY, was added to allow the
creation of a full binary core file. The value supplied to this
environment variable specifies the MPI ranks for which a binary core
file will be generated rather than a lightweight core file. This type
of core file can be used with the GNU Project Debugger (GDB). If this
variable is not set, all ranks will generate a lightweight core file.
The variable must be set to a comma-separated list of the ranks that
will generate a binary core file or "*" (an asterisk) to have all
ranks generate a binary core file.
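As a sketch of the two forms the variable can take (again assuming Cobalt's qsub --env for delivery; the core file name passed to GDB is illustrative):

```shell
# Generate full binary core files for ranks 0 and 1 only;
# all other ranks still produce lightweight core files.
qsub -n 512 -t 30 --env BG_COREDUMP_BINARY=0,1 ./my_app

# Or request binary core files from every rank. Quote the asterisk
# so the shell does not expand it as a glob.
qsub -n 512 -t 30 --env BG_COREDUMP_BINARY='*' ./my_app

# A binary core file can then be examined with GDB on a login node,
# e.g. (file name will vary):
#   gdb ./my_app core.0
```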

Building Applications
When building an application in a cross-compile environment such as
Blue Gene/P, build tools like configure and make will sometimes
compile and execute small code snippets to identify characteristics of
the target platform as part of the build process. If these code
snippets are compiled with a cross-compiler and then executed on the
build machine instead of the target machine, the program might fail to
execute or produce results that do not reflect the target machine.
When that happens, configure will fail or the build will not be
configured as expected. To avoid this problem, the Blue Gene/P system
now provides a
way to transparently run Blue Gene/P executables on a Blue Gene/P
partition when executed on a Blue Gene/P Front End Node. Configuration
of this feature is found in the IBM System Blue Gene Solution: Blue
Gene/P System Administration Redbook, SG24-7417. Usage information is
provided in the IBM System Blue Gene Solution: Application Development
Redbook, SG24-7287.
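For reference, a typical cross-compile configure invocation on a Blue Gene/P Front End Node looks like the sketch below. The compiler path and host triplet shown are the conventional V1R4 locations, but verify them against your own installation:

```shell
# Tell configure we are cross-compiling for the BG/P compute nodes,
# using the system GNU cross-compiler. Paths are the usual defaults
# under /bgsys and may differ on your system.
./configure \
    --host=powerpc-bgp-linux \
    CC=/bgsys/drivers/ppcfloor/gnu-linux/bin/powerpc-bgp-linux-gcc
make
```

With the transparent-execution feature described above enabled, test snippets that configure compiles and runs will execute on a Blue Gene/P partition rather than on the build machine, so the probed characteristics reflect the target.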


The following is a list of enhancements that are included in Version
1.0 Release 4.0 (V1R4) of IBM Blue Gene/P:

/jobs directory added to the I/O node
Before CIOD starts a job, it creates a directory called /jobs/<jobId>,
where <jobId> is the ID of the job as assigned by MMCS. This directory
can be accessed by jobs running on the compute nodes connected to the
I/O node or by tools running on the I/O node.

Move to GDB 6.6
GDB was upgraded to version 6.6.

A tool chain with GCC 4.3.2 is available at the following open source Web site:
http://bg-toolchain.anl-external.org/wiki/index.php/Main_Page

With GCC 4.3.2, you get GOMP, the GNU version of OpenMP. GOMP has not
been thoroughly tested; however, initial trials have worked.
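Building an OpenMP program with the GCC 4.3.2 tool chain is a matter of adding -fopenmp. The install prefix below is illustrative; use whatever path the tool chain was unpacked to on your front-end node:

```shell
# Compile an OpenMP program with the GCC 4.3.2 cross-compiler
# (install path is an example, not a guaranteed location).
/opt/bg-toolchain/bin/powerpc-bgp-linux-gcc -fopenmp \
    -o omp_hello omp_hello.c

# The thread count can then be set at job submission in the usual
# way, e.g.:
#   qsub ... --env OMP_NUM_THREADS=4 ./omp_hello
```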

Thanks,
The ALCF Support Team

