
Setting up an ND development software area


Instructions for setting up a Near Detector software area based on a recent nightly build

Steps:

  1. Create a new software area based on the latest nightly build (see sub-step 4 below for the exact dbt-create command to use)

    1. The steps for this are based on the latest instructions for daq-buildtools

    2. As always, you should verify that your computer has access to /cvmfs/dunedaq.opensciencegrid.org
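
      A quick check is to list the top of the mount, e.g.:

        ls /cvmfs/dunedaq.opensciencegrid.org/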

    3. If you are using one of the np04daq computers and need to clone packages, add the following lines to your .gitconfig file (this way the proxy is not activated globally, so there is nothing to remember to disable later):

      [http]
        proxy = http://np04-web-proxy.cern.ch:3128
        sslVerify = false
      
    4. Here are the steps for creating the new software area:

      # change to the directory above where you want the new software area
      cd <directory_above_where_you_want_the_new_software_area>
      # make the DUNE-DAQ environment available
      source /cvmfs/dunedaq.opensciencegrid.org/setup_dunedaq.sh
      # set up the latest daq-buildtools
      setup_dbt latest
      # create the work area based on the NND24-01-02 nightly build
      dbt-create -c -n NND24-01-02 <work_dir>
      cd <work_dir>
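
      After these commands you are inside the new work area, which should contain a sourcecode subdirectory (used in step 2). A quick sanity check:

        ls
        # expect to see a sourcecode subdirectory, among other entries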
      
    5. Please note that if you are following these instructions on a computer on which the DUNE-DAQ software has never been run before, there are several system packages that may need to be installed on that computer. These are mentioned in this script. To check whether a particular one is already installed, you can use a command like yum list libzstd and check whether the package is listed under Installed Packages.
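
      A minimal sketch for checking several packages at once (libzstd is the example from the text above; extend the list with whatever the script mentions):

        for pkg in libzstd; do
            if yum list installed "${pkg}" >/dev/null 2>&1; then
                echo "${pkg}: installed"
            else
                echo "${pkg}: NOT installed"
            fi
        done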

  2. Add any desired repositories to the sourcecode subdirectory of the work area. An example is provided here.

    1. clone the repositories (the following block has some extra directory checking; it can all be copy/pasted into your shell window)
      # change directory to the "sourcecode" subdir, if possible and needed
      if [[ -d "sourcecode" ]]; then
          cd sourcecode
      fi
      # double-check that we're in the correct subdir
      current_subdir=$(basename "${PWD}")
      if [[ "$current_subdir" != "sourcecode" ]]; then
          echo ""
          echo "*** Current working directory is not \"sourcecode\", skipping repo clones"
      else
          # finally, do the repo clone(s)
          git clone https://github.com/DUNE-DAQ/daqconf.git -b develop
          git clone https://github.com/DUNE-DAQ/nddaqconf.git -b develop
          git clone https://github.com/DUNE-DAQ/lbrulibs.git -b develop
          cd ..
      fi
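
      A quick check that the clones landed where expected (run from the work area):

        ls sourcecode
        # expect daqconf, nddaqconf, and lbrulibs among the entries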
      
      
  3. Set up the work area, optionally install the latest version of nanorc, and build the software

    dbt-workarea-env   # set up the work-area environment
    dbt-build -j 20    # build the checked-out packages
    dbt-workarea-env   # run again so newly built packages are picked up in the environment
    
    
  4. Prepare an nddaqconf.json file, such as the one shown here. This sample includes parameter values that select the PACMAN data type. (Please note the additional comments on this sample file that are included below!)

    {
     "detector": {
         "op_env": "integtest"
     },
     "daq_common": {},
     "boot": {
         "connectivity_service_port": 15005
     },
     "hsi": {
         "random_trigger_rate_hz": "1.0"
     },
     "timing": {},
     "readout": {},
     "trigger": {
         "trigger_window_before_ticks": "2500000",
         "trigger_window_after_ticks": "2500000",
         "mlt_merge_overlapping_tcs": false
     },
     "dataflow": {
         "apps": [
             {
                 "app_name": "dataflow0"
             }
         ]
     }
    }
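
    Before moving on, it can be worth confirming that the file parses as valid JSON, e.g. with Python's built-in checker:

    python -m json.tool ./nddaqconf.json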
  5. Prepare a data-readout map file (e.g. my_dro_map.json), listing the detector streams (real or fake) that you want to run with, e.g.:

    [
     {
         "src_id": 0,
         "geo_id": {
             "det_id": 32,
             "crate_id": 0,
             "slot_id": 0,
             "stream_id": 0
         },
         "kind": "eth",
         "parameters": {
             "protocol": "zmq",
             "mode": "fix_rate",
             "rx_iface": 0,
             "rx_host": "localhost",
             "rx_mac": "00:00:00:00:00:00",
             "rx_ip": "0.0.0.0",
             "tx_host": "localhost",
             "tx_mac": "00:00:00:00:00:00",
             "tx_ip": "0.0.0.0"
         }
     }
    ]
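
    To run with more than one stream, append additional entries to this list, each with a unique src_id (a sketch; the geo_id values below are illustrative):

    {
        "src_id": 1,
        "geo_id": {
            "det_id": 32,
            "crate_id": 0,
            "slot_id": 0,
            "stream_id": 1
        },
        "kind": "eth",
        "parameters": { ... same as in the first entry ... }
    }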
  6. Generate a configuration, e.g.:

    nddaqconf_gen -c ./nddaqconf.json --detector-readout-map-file ./my_dro_map.json my_test_config
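
    The last argument (my_test_config) names the output configuration; its exact contents depend on the release, but a quick listing confirms it was created:

    ls ./my_test_config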
    
  7. nanorc <config name> <partition name> boot conf start_run <run number> wait 60 stop_run scrap terminate

    • e.g. nanorc my_test_config ${USER}-test boot conf start_run 111 wait 60 stop_run scrap terminate
    • or, you can simply invoke nanorc my_test_config ${USER}-test by itself and enter the commands one at a time (see the sketch below)
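
    A minimal sketch of the interactive flow (the run number is arbitrary):

    nanorc my_test_config ${USER}-test
    # then, at the nanorc prompt, issue the commands one at a time:
    #   boot
    #   conf
    #   start_run 111
    #   wait 60
    #   stop_run
    #   scrap
    #   terminate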
  8. When you return to working with the software area after logging out, the steps that you'll need to redo are the following:

    • cd <work_dir>
    • source ./env.sh
    • dbt-build # if needed
    • dbt-workarea-env # if needed
  9. For reference, here are sample daqconf.json and dro_map.json files for MPD:

    Sample daqconf.json for MPD
    {
    "detector": {
      "op_env": "integtest",
      "clock_speed_hz": 62500000
    },
    "daq_common": {},
    "boot": {
        "connectivity_service_port": 15049
    },
    "hsi": {},
    "timing": {},
    "readout": {},
    "trigger": {
        "trigger_window_before_ticks": 30000,
        "trigger_window_after_ticks": 30000,
        "mlt_merge_overlapping_tcs": false
    },
    "dataflow": {
        "apps": [
            {
                "app_name": "dataflow0"
            }
        ]
    },
    "dqm": {}
    }
    Sample dro_map.json for MPD
    [
    {
      "src_id": 0,
      "geo_id": {
          "det_id": 33,
          "crate_id": 0,
          "slot_id": 0,
          "stream_id": 0
      },
      "kind": "eth",
      "parameters": {
          "protocol": "zmq",
          "mode": "fix_rate",
          "rx_iface": 0,
          "rx_host": "localhost",
          "rx_mac": "00:00:00:00:00:00",
          "rx_ip": "0.0.0.0",
          "tx_host": "localhost",
          "tx_mac": "00:00:00:00:00:00",
          "tx_ip": "0.0.0.0"
      }
    }
    ]
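
    With these two files in place, a configuration is generated the same way as in step 6 (the file names and the output name my_mpd_config are placeholders):

    nddaqconf_gen -c ./daqconf.json --detector-readout-map-file ./dro_map.json my_mpd_config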

Notes about the use of localhost in daqconf.json and dro_map.json files

Starting with dunedaq-v4.0.0, when we specify a hostname of "localhost" in a daqconf.json or dro_map.json file, that hostname is resolved at configuration time, using the name of the host on which the configuration is generated. This is handled by the code in the daqconf package, and it is done to prevent problems in situations in which some of the hosts are fully specified and some are simply listed as localhost. Such a mixed system can be problematic since the meaning of "localhost" will be different depending on when, and on which host, it is resolved. To prevent such problems, localhost is now fully resolved at configuration time.

This has ramifications that should be noted, however. Previously, when localhost-only system configurations were run with nanorc, the DAQ processes would be started on the host on which nanorc was run. With the new functionality, any DAQ process with a hostname of "localhost" will always run on the computer on which the configuration was generated, independent of where nanorc is run.
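
If a process must run on one particular machine regardless of where the configuration is generated, spell the hostname out in the files instead of relying on localhost. A sketch of the relevant dro_map.json fragment (np04-srv-001 is just a placeholder hostname):

  "rx_host": "np04-srv-001",
  "tx_host": "np04-srv-001",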

Instructions for using the HDF5LIBS_TestDumpRecord utility

This utility can be used to print out information from HDF5 raw data files. To invoke it, use

  • HDF5LIBS_TestDumpRecord <filename>

Getting an overview of the HDF5 file structure

h5dump-shared -H <filename>

Dumping the binary content of a certain block from an HDF5 file

This is another use of the h5dump-shared utility. This case uses the following command-line arguments:

  • the HDF5 path of the block we want to dump (-d <path_of_block>)
  • the output binary file name (-o <output_file>)
  • the HDF5 file to be dumped

An example can be found in the Far Detector installation instructions.
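
Putting the pieces together (a sketch: <path_of_block> is a placeholder for a dataset path reported by h5dump-shared -H, and some h5dump versions also need the -b option to get raw binary rather than text output):

h5dump-shared -d <path_of_block> -o dataset1.bin <filename>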

Once you have the binary file, you can examine it with tools like Linux od (octal dump), for example

od -x dataset1.bin

Sample integration tests

There are a few integration tests available in the integtest directory of the lbrulibs package. To run each of them:

  • PACMAN integration test: pytest -s test_pacman-raw.py --nanorc-option partition-number 2
  • MPD integration test: pytest -s test_mpd-raw.py

Examples of using the debugging utilities

The mpd-hdf5decoder-RAW.py script can be used to print out the content stored in an HDF5 file.

python mpd-hdf5decoder-RAW.py <hdf5_mpd_file.hdf5>

There are two scripts to send data via a ZMQ link.

Both come with a set of options; list them with the -h flag, e.g. python mpd-generator-RAW.py -h. For the MPD utility, use the --random-size option to send variable-size packets.