DataBlaster
Code Instrumentation Tool for Visualization
W. Bethel, Lawrence Berkeley Laboratory Visualization Group
2 June 1998
Introduction
The intent of this toolset is to provide the means to easily move data from simulations to visualization tools. Unlike some other code instrumentation tools (such as CUMULVS), there are absolutely no dependencies or restrictions with respect to MP computing environments.
The underlying presumption in this toolkit is that the overall goal is to send data, which can be up to a five-dimensional array of double-precision floating point values, from a computational source to a consumer. The underlying data must be reducible to a contiguous chunk of memory. This tool makes use of the "eXternal Data Representation" (XDR) libraries for transmission and translation from one architecture to another. Therefore, you can compute data on an 8-byte-word, big-endian machine and consume the data on a 4-byte-word, little-endian workstation. XDR takes care of all the architecture representation issues (thus, XDR must be present on both client and server machines).
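For illustration, here is a minimal sketch of how XDR hides these representation differences: a contiguous array of doubles is encoded into a machine-independent byte buffer that either end of the connection can decode. The array length and buffer size below are arbitrary illustration values, not taken from the DataBlaster sources.

    /* Minimal sketch: encode a contiguous array of doubles into a
     * machine-independent XDR byte buffer.  NELEM and BUFSZ are
     * arbitrary illustration values.                               */
    #include <rpc/rpc.h>
    #include <stdio.h>

    #define NELEM 64
    #define BUFSZ (NELEM * sizeof(double) + 64)

    int
    main(void)
    {
        double data[NELEM];
        char   buf[BUFSZ];
        XDR    xdrs;
        u_int  i, nbytes;

        for (i = 0; i < NELEM; i++)          /* dummy simulation data */
            data[i] = (double) i;

        /* The encode stream writes a portable representation into buf,
         * independent of this host's word size or byte order.         */
        xdrmem_create(&xdrs, buf, BUFSZ, XDR_ENCODE);
        if (!xdr_vector(&xdrs, (char *) data, NELEM, sizeof(double),
                        (xdrproc_t) xdr_double)) {
            fprintf(stderr, "xdr encode failed\n");
            return 1;
        }
        nbytes = xdr_getpos(&xdrs);          /* bytes ready for the socket */
        xdr_destroy(&xdrs);

        printf("encoded %u bytes\n", nbytes);
        return 0;
    }

The consumer runs the same xdr_vector() call over an XDR_DECODE stream to recover the doubles in its own native representation.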
This toolkit contains:

1. Source code for an AVS Coroutine module
   This module acts as the "server" module. It sits and waits for a connection on a socket from anywhere on the net. Simulation data that arrives on the socket is translated into an AVS field and then sent to downstream modules in the AVS network (if any). There are limitations with respect to how the abstract 5-D data array is mapped into an AVS field. Presently, the AVS field that is constructed uses this template: uniform, double-precision floating point, scalar. Note that source code could be pirated from the example AVS module for the purposes of implementing other types of fields, such as irregular, vector, integer, etc.

2. Source code for an example data generator
   This is a simple C program that is run from the command line. It shows how to use the communication tools in the provided socket library. The program first connects to a server (see #3 or #1), then, for some number of simulation steps, computes data and transfers it to the server. A rough sketch of this client loop appears after this list.

3. Source code for an example data receiver
   Similar in function to the AVS Coroutine (#1 above), except that this is a simpler C program that is run from the command line. It waits for a connection on a socket, then receives data from a client. It doesn't actually do anything with the data.

4. Source code for low-level socket communication tools
   This C source code implements the transport layer used by the DataBlaster clients and servers using standard Berkeley sockets.
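The sketch below suggests what the data generator's main loop might look like, written directly against Berkeley sockets and an XDR stdio stream rather than the provided socket library (whose entry points are not reproduced here). The host name and port echo the hsocket.h defaults described in the next section; compute_step() and the array size are placeholders, not part of the distribution.

    /* Rough sketch of the data-generator (client) side.  The real example
     * in the distribution goes through the provided socket library; this
     * version uses the standard Berkeley socket calls directly.           */
    #include <rpc/rpc.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <string.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SERV_TCP_PORT  8118              /* must match hsocket.h       */
    #define SERV_HOST_NAME "bozo.lbl.gov"
    #define NELEM          64                /* placeholder array size     */

    static void
    compute_step(double *d, int n)           /* placeholder "simulation"   */
    {
        int i;
        for (i = 0; i < n; i++)
            d[i] = (double) i;
    }

    int
    main(void)
    {
        struct sockaddr_in serv;
        struct hostent    *hp;
        double             data[NELEM];
        FILE              *fp;
        XDR                xdrs;
        int                fd, step;

        /* Connect to the server named in hsocket.h. */
        if ((hp = gethostbyname(SERV_HOST_NAME)) == NULL) {
            fprintf(stderr, "unknown host %s\n", SERV_HOST_NAME);
            exit(1);
        }
        memset(&serv, 0, sizeof(serv));
        serv.sin_family = AF_INET;
        serv.sin_port   = htons(SERV_TCP_PORT);
        memcpy(&serv.sin_addr, hp->h_addr_list[0], hp->h_length);

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *) &serv, sizeof(serv)) < 0) {
            perror("connect");
            exit(1);
        }

        /* Wrap the socket in a stdio stream so XDR can write to it. */
        fp = fdopen(fd, "w");
        xdrstdio_create(&xdrs, fp, XDR_ENCODE);

        for (step = 0; step < 10; step++) {
            compute_step(data, NELEM);       /* "simulate" one time step   */
            if (!xdr_vector(&xdrs, (char *) data, NELEM, sizeof(double),
                            (xdrproc_t) xdr_double))
                break;
            fflush(fp);                      /* push this step to the wire */
        }

        xdr_destroy(&xdrs);
        fclose(fp);
        return 0;
    }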
User Configuration
In the source code distribution, the hsocket.h file contains two #define's that are to be configured by the user:
    #define SERV_TCP_PORT  8118
    #define SERV_HOST_NAME "bozo.lbl.gov"
The first, SERV_TCP_PORT, is the port number over which communications will take place. The second, SERV_HOST_NAME, is the fully-qualified name of the machine which will run the server code.
At the very least, you'll need to change SERV_HOST_NAME to be the name of the machine where you'll run the server. The port number is somewhat arbitrary. Consult local man pages for information about which port number ranges to steer clear of when you choose a port number.
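To show how these two #define's come into play on the receiving end, here is a rough sketch of a server that binds to SERV_TCP_PORT, accepts one connection, and decodes arrays of doubles off the socket with XDR. The distributed receiver uses the provided socket library and its own data sizes; the array length here is an arbitrary placeholder and error checking is kept to a minimum.

    /* Rough sketch of the receiver (server) side, using the Berkeley
     * socket calls directly.  NELEM is a placeholder array size.      */
    #include <rpc/rpc.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <unistd.h>
    #include <stdio.h>

    #define SERV_TCP_PORT 8118               /* same value as hsocket.h */
    #define NELEM         64

    int
    main(void)
    {
        struct sockaddr_in serv;
        double data[NELEM];
        FILE  *fp;
        XDR    xdrs;
        int    lfd, cfd;

        memset(&serv, 0, sizeof(serv));
        serv.sin_family      = AF_INET;
        serv.sin_addr.s_addr = htonl(INADDR_ANY);
        serv.sin_port        = htons(SERV_TCP_PORT);

        lfd = socket(AF_INET, SOCK_STREAM, 0);
        bind(lfd, (struct sockaddr *) &serv, sizeof(serv));
        listen(lfd, 5);

        cfd = accept(lfd, NULL, NULL);       /* block until a client connects */
        fp  = fdopen(cfd, "r");
        xdrstdio_create(&xdrs, fp, XDR_DECODE);

        /* Pull arrays off the wire until the client disconnects. */
        while (xdr_vector(&xdrs, (char *) data, NELEM, sizeof(double),
                          (xdrproc_t) xdr_double))
            printf("received %d doubles, first = %g\n", NELEM, data[0]);

        xdr_destroy(&xdrs);
        fclose(fp);
        close(lfd);
        return 0;
    }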
Tested Architectures
Cray T3E (client)
Sun Ultra, Solaris 2.5.1 (client, server, AVS)
SGI Onyx, Irix 6.4 (client, server, AVS)
Download Source
Download the source code! Grab this gzipped tarfile.
Sample Results
Greg Kilcup, a visiting researcher, needed a tool that could be used to connect his simulation code to visualization tools; the basic requirement was for an output "data pipe." His simulation runs on the T3E in an MPI coding environment.
The simulation is an effort to solve the lattice "gauge fixing" problem. The vector components at each node in the gauge field have one of several possible space-time orientations. Since countless permutations are theoretically valid, researchers around the world face obstacles in sharing data and results, because there is no fixed source data for use in reproducing experiments and simulations. One of Greg's goals is to create a number of "fixed gauge" data sets for use by multiple researchers. Fixed gauges are an important intermediate step in the computation of quark masses.
The simulation attempts to minimize the overall transformation incurred during a walk through the lattice. When the transformation is minimized, the matrix trace of the transformation is close to unity. The visualization above illustrates the matrix trace of the simulation. Areas of high opacity are indicative of close-to-unity matrix traces.