
Dannowitz TWiki Logbook

February 2012

Mapping

  • Compiled mapping from Florian (St3), Lamiaa (St1+2), Evan (v1495 cards on hodoscopes), and previous mapping into one file, found at
    /seaquest/production/mapping/Mapping.txt

  • Need to start storing the mapping with the productions, as each schema will have its own mapping. The deCODA currently looks to calmap for the mapping, but it must eventually look to the schema into which it is decoding.

  • Talk to Markus at some point regarding the start-to-finish framework for creating a schema, preparing it for data (mapping, calibration), decoding, and any post-processing. Should this all be done by the decoding daemon?

Trigger Masking

  • Add indexes to production_mix_d503g541.Hit:

ALTER TABLE `production_mix_d503g541`.`Hit`
ADD INDEX `trackID` (`trackID` ASC) 
, ADD INDEX `particleID` (`particleID` ASC)
, ADD SPATIAL INDEX `x` (`xAtDigiPlane` ASC) 
, ADD SPATIAL INDEX `y` (`yAtDigiPlane` ASC) 
, ADD SPATIAL INDEX `z` (`zAtDigiPlane` ASC)  
, ADD INDEX `detectorName` (`detectorName` ASC) 
, ADD INDEX `source` (`source` ASC) 
, ADD INDEX `digiPlanesID` (`digiPlanesID` ASC) ;

  • This did not work, since there are NULL values in the x/y/zAtDigiPlane fields and you cannot add a SPATIAL index on a column containing NULLs (MySQL SPATIAL indexes are really intended for NOT NULL geometry-type columns in any case). Instead, did the following:

ALTER TABLE `production_mix_d503g541`.`Hit`
ADD INDEX `trackID` (`trackID` ASC) 
, ADD INDEX `particleID` (`particleID` ASC) 
, ADD INDEX `detectorName` (`detectorName` ASC) 
, ADD INDEX `source` (`source` ASC) 
, ADD INDEX `digiPlanesID` (`digiPlanesID` ASC) ;

  • "ERROR 2013: Lost connection to MySQL server during query". Now going to just run the query by submitting the script directly onto seaquel.physics machine.
     nohup mysql -u root -p < index.sql > output.out 
    This will run the query without any issue of losing connection to the server.

TWiki

  • Began work log.
  • Began personal To-Do list.
  • Began categorizing all of the links on the front page, in an attempt to make the main page one page long (see: the HERMES Wiki Main Page).

Trigger Masking

  • Indexing is complete

  • There is now an indexed mix of 896k events. A few have only gun hits. After checking, there are indeed events in the dimuon run that have no entries in the Hit table. If need be, these events can be isolated and deleted.
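
A sketch of how those empty events could be isolated, assuming the production schema has an Event table with one row per eventID (I have not confirmed that table name here):

SELECT e.eventID
FROM Event e
LEFT JOIN Hit h USING (eventID)
WHERE h.eventID IS NULL;

-- And, if deletion becomes necessary, the multi-table form avoids a subquery:
DELETE e
FROM Event e
LEFT JOIN Hit h USING (eventID)
WHERE h.eventID IS NULL;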

  • After being prompted by Josh, I checked to get a ballpark figure for how many hits come from a single gun event. Came up with the following:
eventID, COUNT(*)
1, 892
2, 707
3, 1000
4, 642
5, 1279
6, 977
7, 778
8, 1128
9, 799
10, 884
11, 916
12, 813
13, 1044
14, 578
15, 1154
16, 508
17, 1307
18, 715
19, 745
20, 636

These hits will be smeared over 5+ following events, so the number of chamber hits per event will increase by upwards of a factor of 5. Since these numbers are rather large to begin with, the smeared Hit table is going to grow substantially.

  • Added a new "realEventID" column to the Hit table. When I "smear" the hits, I'll preserve their source event. One will be able to look only at hits from a given event by selecting Hits WHERE eventID=realEventID.
ALTER TABLE Hit ADD COLUMN realEventID INT AFTER eventID;
UPDATE Hit SET realEventID = eventID;
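
For example, once the smearing has been run, the original (unsmeared) hits of a given event can be pulled out like this (event 5 chosen arbitrarily):

SELECT * FROM Hit WHERE eventID = 5 AND realEventID = eventID;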

  • In order to keep the first iteration of this process fast and simple, I'm creating a production "production_mix_d503g541_small" which will contain only the first 10k events. This will allow each step to proceed faster, so that I can work out the kinks in short order instead of waiting, say, an hour to add a field to a giant Hit table.
  • Further, Josh believes, and I concur, that we won't need that many events worth of hits to begin with.

CREATE DATABASE production_mix_d503g541_small;
USE production_mix_d503g541_small;
CREATE TABLE Hit LIKE production_mix_d503g541.Hit;
INSERT INTO Hit
   SELECT * FROM production_mix_d503g541.Hit
   WHERE eventID < 10000;

  • The rest of the tables are copied over as well in a similar fashion (10k events' worth of relevant data).

  • Developed a snippet of code that creates a stored procedure to perform the smearing:

delimiter $$

CREATE DEFINER=`production`@`%` PROCEDURE `smear`()
BEGIN

  DECLARE evNum INT DEFAULT 1;
  DECLARE v1 INT DEFAULT 1;
  DECLARE smearNum INT DEFAULT 0;

  CREATE TEMPORARY TABLE tempHits LIKE Hit;

  WHILE evNum <= 896000 DO

    # Start each event with an empty staging table; otherwise hits
    # from earlier events would get smeared again.
    TRUNCATE TABLE tempHits;

    # 'XXX' is a placeholder for the detector-name pattern
    INSERT INTO tempHits
    SELECT * FROM Hit
    WHERE eventID=evNum AND detectorName LIKE 'XXX';

    # Your standard index
    SET v1 = 1;
    # The number of events that these hits will smear over
    #   for this specific chamber
    SET smearNum = 5;

    WHILE v1 <= smearNum DO

        # Note: hitID is copied as-is, which assumes hitID is not a unique key
        INSERT INTO Hit (
            `hitID`, `eventID`, `realEventID`, `trackID`, `digiPlanesID`, 
            `particleID`, `elementID`, `driftTime`, `driftDistance`, 
            `xAtDigiPlane`, `yAtDigiPlane`, `zAtDigiPlane`, `detectorName`, 
            `geantName`, `source`)
        SELECT `hitID`, (eventID+v1), `eventID`, `trackID`, `digiPlanesID`, 
            `particleID`, `elementID`, `driftTime`, `driftDistance`, 
            `xAtDigiPlane`, `yAtDigiPlane`, `zAtDigiPlane`, `detectorName`, 
            `geantName`, `source`
        FROM tempHits;

        SET v1 = v1 + 1;

    END WHILE;

    SET evNum = evNum + 1;
  END WHILE;

  DROP TEMPORARY TABLE tempHits;
END$$

delimiter ;
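
Once defined, running the whole smear is then just:

CALL smear();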

TWiki

  • Really liking the multi-column formatting at https://hermes-wiki.desy.de/Hardware_and_Computing, I have tried to replicate its style. Short of using straight-up HTML, which I find to be outside the spirit of TWiki's quick-and-easy formatting, there does not seem to be a way to natively replicate this style (a table within a table).
  • There is a TWiki plug-in that allows MediaWiki table formatting, MediaWikiTablePlugin. I have requested that the admins install this plugin. It may very well come in handy with this restructuring.
  • Contacted by Danny Tang regarding the URL of our TWiki.


Trigger Masking

  • Back-of-the-envelope calculation for how many events to smear hits over in a given detector. The idea is that the signal from a hit in a chamber will persist over several subsequent triggered events, with that number depending on the wire spacing.

Approximate drift speed: 5 cm/us
Beam rate: 2x10^12 protons/s
"Bucket" size: 40 000 protons/bucket
  -> bucket rate: (2x10^12)/(4x10^4) = 5x10^7 buckets/s, i.e. ~20 ns/bucket
  -> drifting 1 cm takes (1 cm)/(5 cm/us) = 0.2 us = ~10 buckets
=================================
"Smear" factor: 10 buckets/cm

  • This will be applied to each of the detectors. Since the farthest distance the ions in a chamber will traverse is 1/2 of the anode-to-anode distance (wire spacing), this smear factor will be multiplied by 1/2*spacing to get the number of total smeared events per hit. For example, a detector with 1.0cm wire spacing will have its hits smeared into the following 4 events (10 buckets/cm * (1.0 cm)/2 = 5 total buckets (events)).

mysql> SELECT detectorName, spacing, CEIL(spacing*5) FROM geometry_R537.digiPlanes WHERE detectorName NOT LIKE 'H%';
+--------------+---------+-----------------+
| detectorName | spacing | CEIL(spacing*5) |
+--------------+---------+-----------------+
| ST1X         |   0.635 |               4 |
| ST1X'        |   0.635 |               4 |
| ST1U         |   0.635 |               4 |
| ST1U'        |   0.635 |               4 |
| ST1V         |   0.635 |               4 |
| ST1V'        |   0.635 |               4 |
| ST2X         |  2.0828 |              11 |
| ST2X'        |  2.0828 |              11 |
| ST2U         |  2.0193 |              11 |
| ST2U'        |  2.0193 |              11 |
| ST2V         |  2.0193 |              11 |
| ST2V'        |  2.0193 |              11 |
| ST3TX        |       2 |              10 |
| ST3TX'       |       2 |              10 |
| ST3TU        |       2 |              10 |
| ST3TU'       |       2 |              10 |
| ST3TV        |       2 |              10 |
| ST3TV'       |       2 |              10 |
| ST3BX        |   1.016 |               6 |
| ST3BX'       |   1.016 |               6 |
| ST3BU        | 0.98552 |               5 |
| ST3BU'       | 0.98552 |               5 |
| ST3BV        | 0.98552 |               5 |
| ST3BV'       | 0.98552 |               5 |
| P1Vf         |       5 |              25 |
| P1Vb         |       5 |              25 |
| P1Hf         |       5 |              25 |
| P1Hb         |       5 |              25 |
| P2Vf         |       5 |              25 |
| P2Vb         |       5 |              25 |
| P2Hf         |       5 |              25 |
| P2Hb         |       5 |              25 |
+--------------+---------+-----------------+
32 rows in set (0.00 sec)

  • This allows us to group the detectors by spacing, each group with a common smear factor:
    • ST1% - 4
    • ST2% - 11
    • ST3T% - 10
    • ST3B% - 5
    • P% - 25

  • There will then be 5 procedures made to handle the smearing of the hits in each of these groups of detectors (a sketch of one follows this list). All of them will be found in production_mix_d503g541. They will output their smeared hits into a "smearHit" table.
    • smear_ST1()
    • smear_ST2()
    • smear_ST3T()
    • smear_ST3B()
    • smear_P()
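
A minimal sketch of what one of these could look like, assuming smearHit has the same column layout as Hit (the actual procedures may differ, e.g. by looping event-by-event as in smear() above):

delimiter $$
CREATE PROCEDURE smear_ST1()
BEGIN
  DECLARE v1 INT DEFAULT 1;
  -- Station 1 group: smear factor 4 (from the grouping above)
  WHILE v1 <= 4 DO
    INSERT INTO smearHit
    SELECT hitID, (eventID+v1), eventID, trackID, digiPlanesID,
           particleID, elementID, driftTime, driftDistance,
           xAtDigiPlane, yAtDigiPlane, zAtDigiPlane, detectorName,
           geantName, source
    FROM Hit
    WHERE detectorName LIKE 'ST1%';
    SET v1 = v1 + 1;
  END WHILE;
END$$
delimiter ;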

  • Before running any of these, there were 8 793 091 entries in the Hit table. The following is how many hits each of the procedures created:
    • smear_ST1() - 10 599 480
    • smear_ST2() - 15 171 035
    • smear_ST3T() - 4 139 920
    • smear_ST3B() - 1 895 575
    • smear_P() - N/A

  • Until I am told that it is needed, I'm not smearing the proportional tube hits yet. This is because doing so would add ~35M more hits to the table.

ROOT Studies

  • Installing ROOT from source, as there is no longer a packaged version in the Ubuntu software repository
  • Compilation is taking quite some time (>2h) on my netbook
  • Sylvester suggests (next time) using "make -j 4" to spawn several parallel make processes, as he says it makes a significant difference in compile time

Scatter Plot of smearHits

The goal is to have a scatterplot of all hits of a given event (bucket of protons) and visualize how they smear over into the following events. Beyond that, if I'm able to superimpose the lines of the hodoscopes along with which of them light up within a single event, that would be very helpful in seeing how beneficial the masking will be.

  • New hits in bright red
  • Hits from previous events in lighter red; even older events' hits in progressively lighter colors (orange, yellow, etc.)
  • Add black lines for the boundaries of the detector
  • Add blue lines for the hodoscope paddles
  • Shade regions where a hodoscope is hit

First things first though, let's get the first two bullets taken care of.

Trigger Masking

  • In order to relate a chamber hit to a hodoscope hit, we first must make a lookup table that relates wires to paddles.

  • From talking to Josh: he uses this snippet in Mathematica to get the vectors for the individual wires:

v0 = {
          (tP[[kxSpacing]] (el - (tP[[kxNumElements]] + 1)/2) + 
          tP[[kxxPrimeOffset]]) Cos[tP[[kxAngleFromVert]]*\[Pi]/180],
          (tP[[kxSpacing]] (el - (tP[[kxNumElements]] + 1)/2) + 
          tP[[kxxPrimeOffset]]) Sin[tP[[kxAngleFromVert]]*\[Pi]/180],
          0
     };
     
v1 = {
         -Sin[tP[[kxAngleFromVert]]*\[Pi]/180], 
         Cos[tP[[kxAngleFromVert]]*\[Pi]/180], 
         0
     };

  • tP is an array holding plane info, and "el" is the element number
  • This makes two 3D vectors:
    • v0 is a vector that points to the wire (presumably from the x,y origin)
    • v1 is a unit vector in the wire direction
    • Both are in the same plane

Pre-Beam Tasks

  • The decoding daemon is now up and running, performing with few errors.

  • The server is cluttered with the previous test_% schemas. Time to clean those up...

  • Wrote this bash script, piped its output to an .sql file, and submitted that to the server.


#!/bin/bash
# Emit DROP statements for the old test schemas.
# (The schema names are assumed here to be test_1 .. test_10.)
for i in `seq 1 10`;
do
        echo "DROP DATABASE \`test_${i}\`;"
done

  • Very slow event rate. Should profile where the slow-down is. Should also enable full sampling mode.

  • Runs 56 and 64 are throwing errors:

mdiefent@e906-gat2:~ decode --run 56
/seaquest/e906daq/coda/run_descriptor/run_descript_56.dat
Message successfully transmitted, ID=509
Database connection: Success

Loading... "/data2/e906daq/coda/data/run_000056.dat"
Opening CODA File Status: OK!

End of Spill Event Processing Error in Event 263


mdiefent@e906-gat2:~ decode --run 64
/seaquest/e906daq/coda/run_descriptor/run_descript_64.dat
Message successfully transmitted, ID=517
Database connection: Success

Loading... "/data2/e906daq/coda/data/run_000064.dat"
Opening CODA File Status: OK!

Warnings have occured from the slow control event upload:

Level    Code    Message
Warning    1265    Data truncated for column 'value' at row 185    

  • Looking first at run 56, we see the following in the End-of-Spill event in question:

physicsEvent[0]=0000006f
physicsEvent[1]=000c10cc
physicsEvent[2]=00000004
physicsEvent[3]=c0000100
physicsEvent[4]=00000107
physicsEvent[5]=0000000c
physicsEvent[6]=00000000
physicsEvent[7]=00000027
physicsEvent[8]=00020107
physicsEvent[9]=11111111
physicsEvent[10]=11111111
physicsEvent[11]=00002b0c
physicsEvent[12]=e906f00f
physicsEvent[13]=00000fff
physicsEvent[14]=e906f003
physicsEvent[15]=00000000
physicsEvent[16]=00000055
physicsEvent[17]=00000061
physicsEvent[18]=0000000d
physicsEvent[19]=00000021
physicsEvent[20]=0000000e
physicsEvent[21]=0000000b
physicsEvent[22]=00000025
physicsEvent[23]=0000000e
physicsEvent[24]=0000001e
physicsEvent[25]=0000000d
physicsEvent[26]=0000001f
physicsEvent[27]=0000000d
physicsEvent[28]=00000137
physicsEvent[29]=00000000
physicsEvent[30]=00000000
physicsEvent[31]=00000000
physicsEvent[32]=00000000
physicsEvent[33]=00000000
physicsEvent[34]=00000000
physicsEvent[35]=00000000
physicsEvent[36]=00000000
physicsEvent[37]=00000000
physicsEvent[38]=00000000
physicsEvent[39]=00000000
physicsEvent[40]=00000000
physicsEvent[41]=00000000
physicsEvent[42]=00000000
physicsEvent[43]=00000000
physicsEvent[44]=00000000
physicsEvent[45]=00000000
physicsEvent[46]=00000000

  • Originally, Spill events (event types 11 and 12) contained only spill information. It turns out here that trigger information (0xe906f00f) and scaler information (0xe906f003) have also been added. It remains to be seen whether this applies to just this one event or to all events from here on.

Decoder Profiling

  • Trying valgrind's tool, callgrind.
    valgrind --tool=callgrind ./decoda -f data/run_000075.dat -m 1 -d test_profiling

  • After this, I use kcachegrind on the callgrind output file. This renders a tremendous amount of information, numerically and graphically. See the attached plot below.

(attached plot: profile_nosprintf.png)

  • The profile seems to say that most of the time is taken up by creating the INSERT statements. An alternative for this would be to use "prepared statements". I recall trying these before and noting that they were a bit of a slow-down, but that was very long ago, and I may not have done it right.

  • After some thought, I recall that a prepared statement doesn't automatically turn into a bulk INSERT just because you bind and execute it several times.
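
To illustrate at the SQL level (the C API behaves the same way): each EXECUTE of a single-row prepared INSERT is its own round-trip, while a bulk insert has to enumerate all of its rows in the statement text itself. The table t here is purely hypothetical:

PREPARE ins FROM 'INSERT INTO t (a, b) VALUES (?, ?)';
SET @a = 1; SET @b = 2;
EXECUTE ins USING @a, @b;  -- one row, one round-trip
SET @a = 3; SET @b = 4;
EXECUTE ins USING @a, @b;  -- second row, second round-trip
DEALLOCATE PREPARE ins;

-- Bulk form: the VALUES list itself must hold every row
PREPARE bulk FROM 'INSERT INTO t (a, b) VALUES (?, ?), (?, ?)';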

  • Before I try anything radically new, I'll try removing the indexes from the tables and see how that affects performance.

  • For run_00075.dat:
    • 2.73 events/s with indexes
    • 2.81 events/s without indexes

  • Therefore, indexing is NOT the culprit of the incredible slowdown.

  • sprintf, as seen in the profile above, is the main bottleneck in both runs. There must be a better, faster way to assemble these bulk INSERT queries.

  • Attempting to create a bulk prepared statement....
    • Successfully created the statement, assigned the values, and bound a set of values to the statement
    • Now, all

Run Debugging

  • Reproducing the error from run_00056.dat:

Git-ing it On

  • Added the CalMap Uploader and the Decoder to the /seaquest/repository

  • Markus fixed permissions so that everyone in the e906 group can add and edit

IDE Time

  • With the software now fully version-controlled in Git, and after being inspired by Josh's Xcode setup, I've decided to maintain a tailored IDE environment on my personal computer.

  • I'm starting with Eclipse, since it is widely used and supported. I may try out Netbeans later, but one at a time.

  • Installed Eclipse and Eclipse-CDT for its C/C++ support

  • Enabled code folding with (Window ‣ Preferences ‣ C/C++ ‣ Editor ‣ Folding). This may cause performance issues, so I'll keep an eye on it. Applicable keyboard shortcuts are:
    • Enable Folding — Ctrl + Numpad Divide
    • Expand All — Ctrl + Numpad Multiply
    • Collapse All — Ctrl + Shift + Numpad Divide
    • Reset Structure — Ctrl + Shift + Numpad Multiply

  • Enabled line numbering (Window ‣ Preferences ‣ General ‣ Text Editors)

  • After becoming furious over project management in Eclipse, I have already abandoned it and am now trying out NetBeans IDE 7.1 for C/C++

  • Very nice interface, but unfortunately, I've made the decoder 64-bit compatible, and my system is still 32-bit.

  • I'll put this IDE project on hold until I upgrade my system to 64-bit.

Trigger Masking

  • Progress has been made on an important part of creating the lookup table: finding the specific wire number, given an (x,y) coordinate.

  • The equation is as follows. Given a spacepoint (x_p, y_p),

n = b_x * cos(theta) / c + b_y * sin(theta) / c

where

n     = element number of the wire
theta = angle from vertical
c     = wire spacing
N     = number of wires
x_off = offset distance for primed chambers
b_x   = x_p + (c*(N+1)/2 - x_off) * cos(theta)
b_y   = y_p + (c*(N+1)/2 - x_off) * sin(theta)

Expanding (and using cos^2 + sin^2 = 1), this simplifies to

n = (x_p*cos(theta) + y_p*sin(theta) - x_off) / c + (N+1)/2

i.e. the projection of the point onto the wire-measuring direction, converted into wire units.

  • This n will not be an integer, so it will have to be either rounded, ceiling'd, or floor'd. To be decided.
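
Since c, theta, N, and x_off live in the digiPlanes table, the same computation can be sketched directly in SQL. Of the column names below, only spacing and detectorName are confirmed (see the query above); angleFromVert, numElements, and xPrimeOffset are guesses based on Josh's Mathematica variable names:

-- Wire number nearest the point (@x, @y) for each chamber plane
SET @x = 12.3;  SET @y = -4.5;
SELECT detectorName,
       ROUND( (@x*COS(RADIANS(angleFromVert)) + @y*SIN(RADIANS(angleFromVert))
               - xPrimeOffset) / spacing
              + (numElements + 1)/2 ) AS n
FROM geometry_R537.digiPlanes
WHERE detectorName NOT LIKE 'H%';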

  • The idea right now is to make a C script that:
    • Figures out the (x,y) coordinates of the corners of each paddle
    • Calculates the wire number for a nearby chamber at each corner
    • Takes the min and max wire numbers and sweeps through, creating a lookup table associating those wires with that paddle
    • Does this for each hodo array and each corresponding chamber (a sketch of the resulting table follows below)

  • Since the hodoscopes and the chambers are not coplanar, there will be some 'fudge factor' due to the z-displacement. This can be accommodated by increasing/decreasing the max/min wire number per paddle.
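
One possible shape for the lookup table the script would fill, with a sample join against it; every name here is hypothetical:

CREATE TABLE wirePaddleLookup (
  chamberName   CHAR(8),  -- e.g. 'ST1X'
  wireElementID INT,      -- chamber wire number
  hodoName      CHAR(8),  -- the associated hodoscope array
  paddleID      INT       -- hodoscope paddle number
);

-- Chamber hits that line up with a paddle that fired in the same event:
SELECT h.*
FROM Hit h
JOIN wirePaddleLookup l
  ON  l.chamberName   = h.detectorName
  AND l.wireElementID = h.elementID
JOIN Hit hodo
  ON  hodo.detectorName = l.hodoName
  AND hodo.elementID    = l.paddleID
  AND hodo.eventID      = h.eventID;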

CODA Event Format Compendium

  • It might be a good idea to start a TWiki page with all of the CODA event formats explicitly written out and defined (and updated as things change).

Hodoscope Rate Estimates

  • After talking to Naomi: she suggests that the first thing we can do when we get even 10 minutes of beam is to estimate (via GMC) the rate in the hodoscopes and then measure it when we get tuning beam.

  • The idea is: estimate how many protons we get in X amount of time when we get tuning beam, scale the GMC gun to that rate, and estimate the rate in, particularly, station 1 hodoscopes.

  • For this, it would be ideal to have a gun run with no target. I'll be contacting BZ to have him make one when he can.

New FPGA ZS CODA Format

I have received an email from Grass regarding a new data format.

Decoder Speed Analysis

  • The decoder has run with the Hits processed through prepared statements, and the results are interesting:

(attached plot: profile.png, a profiling of the decoder running)

  • I did not eliminate the sprintf's assembling the v1495 format Hits

  • The event rate, however, decreased to 1.3 Events/s, but I was running several instances at the time. Regardless, this is still a ridiculously low rate.

  • There are a few other bulk inserts being sprintf'd. I'll implement the prepared statement method and then proceed from there.

  • After running gprof on a run of the decoder, I get this:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  us/call  us/call  name    
 30.00      0.03     0.03                             cli_stmt_execute
 20.00      0.05     0.02                             mysql_stmt_bind_param
 20.00      0.07     0.02                             store_param_int32
 10.00      0.08     0.01   336185     0.03     0.03  eventReimerTDCSQL
 10.00      0.09     0.01                             my_net_read
 10.00      0.10     0.01                             my_realloc_str
  0.00      0.10     0.00    42615     0.00     0.00  eventZSWCTDCSQL
  0.00      0.10     0.00    23675     0.00     0.00  eventv1495TDCSQL
  0.00      0.10     0.00     4735     0.00     2.11  eventSQL
  0.00      0.10     0.00     1665     0.00     0.00  make_tdc_query
  0.00      0.10     0.00      213     0.00     0.00  make_v1495_query
  0.00      0.10     0.00        1     0.00     0.00  createSQL
  0.00      0.10     0.00        1     0.00     0.00  initialize
  0.00      0.10     0.00        1     0.00     0.00  prestartSQL
  0.00      0.10     0.00        1     0.00     0.00  runExists
  0.00      0.10     0.00        1     0.00     0.00  send_final_tdc
  0.00      0.10     0.00        1     0.00     0.00  send_final_v1495

  • The baffling thing is that, in reality, this takes many minutes to run, yet the profile accounts for only 0.10 s. (gprof only samples CPU time spent in the process itself, so time spent blocked waiting on the MySQL server never shows up.)

  • Going to run it with no mysql submissions next.

No-INSERT Run

  • With all of the sprintf's gone and the decoder still running slowly, it's now time to determine whether the slowdown is due to the submission of the bulk INSERT statements or something else.

  • I'm beginning to suspect that it's something else, given the profiling I've seen above, but we shall see.

  • After running the decoder with only the mysql_stmt_execute calls commented out, I'm getting speeds upwards of 1500 events/s. This is the bottleneck.

Two things to try now:

    • Write the data to a file and do a LOAD DATA LOCAL INFILE on it, like I do with slow control (sketched below)
    • Take a big INSERT statement, put it in a file, run it on gat2 talking to the gat2 MySQL server, then talking to the SeaQuel server, and then run the file on the lx2 machine talking to SeaQuel and gat2. Just to get a feel for how fast these machines talk to each other.
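
A sketch of the first option; the file path is hypothetical, and the column order in the file must match the Hit table:

-- Decoder writes hits as tab-separated rows, then one bulk load replaces
-- thousands of individual INSERT round-trips:
LOAD DATA LOCAL INFILE '/tmp/hit_dump.tsv'
INTO TABLE Hit
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';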

  • Running with mapping vs. without mapping makes a factor of ~8 difference in speed.

  • Make sure that everything is appropriately indexed, in order to optimize the mapping query.
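
The mapping lookup is presumably keyed on (ROC, board, channel), so an index along these lines is what to check for; the table and column names here are assumed:

ALTER TABLE mapping
  ADD INDEX rocBoardChannel (rocID, boardID, channelID);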

New JyTDC Format

  • From what I can tell, here's the new JyTDC format:
    • 0xe906 f009 - Flag for the JyTDC-type format
    • 0x0000 0950 - BoardID
    • 0x0001 XX01 - The second binary bit = 0 for 32 windows, 1 for 64 windows (32 in this case); the delay time is derived from XX (Td = XX * 2 * 10)
    • 0x2012 XXXX - Buffer word, where XXXX is the number of words to follow
    • 0x0000 0XYZ - Trigger time word, where X, Y, and Z each correspond to trigger times that are summed into the trigger time (Tt = val(X) + val(Y) + val(Z))
    • Begin data
    • 0x0000 0X00 - X corresponds to a signal time in a specific channel in a time window: Ts = window*10 + val(X)
    • 0x88XX 00YY - If there is any run of zero-words, this takes their place. XX marks the location of this word within the ZS format; YY indicates the first of the zeroes in the non-ZS format.

  • This means that if there is some zero-suppression going on, one must read two of these 0x88 words in order to figure out what the data in between means.

JyTDC Format

  • The prototype decoder function is finished

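// Note: this is the body of the JyTDC handler only. physicsEvent[], j,
// boardID, codaEventNum, runNum, spillID, ROCID, vmeTime, the tdc* arrays,
// max_tdc_rows, make_tdc_query(), trigger_pattern[], channel_pattern[],
// and the get_hex_bit(s)/get_bin_bit helpers are all assumed to be
// declared in the surrounding decoder code.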
int windows = 0;
   int channel = 0;
   int Td, Ts, Tt;
   int val;
   int tdcTime;
   int window;
   int word, zsword;
   int i = 0;
   int k;
   int buf = 0;
   int channelFiredBefore[64];

   // Word 1 contains boardID only
   boardID = get_hex_bits(physicsEvent[j],7,4);
   j++;
   
   // Word 2 contains Td and signal window width
   // x2 (resolution) x10 (width of clock)
   Td = (get_hex_bits(physicsEvent[j],3,2))*2*10;
   if(get_bin_bit(physicsEvent[j],1)){
      windows = 64;
   } else {
      windows = 32;
   }
   j++;

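   // Buffer word (0x2012 XXXX): low 16 bits give the number of words to follow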
   buf = (0xffff & physicsEvent[j]);
   j++;

   // Word 3 contains Tt
   Tt = 0;
   for(k=0;k<3;k++){
      val = get_hex_bit(physicsEvent[j],k);
      Tt += trigger_pattern[val];
   }
   j++;

   for (k=0;k<64;k++){ 
      // This flags the window in which a channel has fired previously
      channelFiredBefore[k]=0xBD;
   }

   window = 0;
   zsword = 2;
   word = 0;
   channel = 0;
   
   while(zsword <= buf){
      
      if((physicsEvent[j] & 0xFF000000)==0x88000000){
         
         for(k=1;k<(buf-zsword)+1;k++){
             // Look for the next 0x88 word, see where the next set
             // of zeroes starts, and count backwards from there
             if ((physicsEvent[j+k] & 0xFF000000)==0x88000000){
            word = (physicsEvent[j+k] & 0x000000FF)-(k-1);
            // Set k to exit the for loop
            k = buf;
             } else if(zsword+k==buf){
            word = (windows*8)-(k-1);
             }
         }
         // Figure out which window and channel to start with
         window=floor(word/8.0);
         channel=(word%8)*8;
      } else {
         for(k=0;k<8;k++){
         
           val=get_hex_bit(physicsEvent[j],k);
           if(val!=0x0){
             if(window!=channelFiredBefore[channel]+1){
            channelFiredBefore[channel]=window;
            Ts = window*10+channel_pattern[val];
            tdcTime = 1280-Td-Ts+Tt;

            tdcCodaID[tdcCount] = codaEventNum;
            tdcRunID[tdcCount] = runNum;
            tdcSpillID[tdcCount] = spillID;
            tdcROC[tdcCount] = ROCID;
            tdcBoardID[tdcCount] = boardID;
            tdcChannelID[tdcCount] = channel;
            tdcStopTime[tdcCount] = tdcTime;
            tdcVmeTime[tdcCount] = vmeTime;
            tdcSignalWidth[tdcCount] = 0;
            tdcCount++;
            
            if (tdcCount == max_tdc_rows) {
                make_tdc_query(conn);
                tdcCount = 0;
            }
             }
             channelFiredBefore[channel]=window;
           }
           channel++;
         }
         word++;
         window=floor(word/8.0);
         channel=(word%8)*8;
      }
      j++;
      zsword++;
   }
   return j;

  • So far, things look like they're working okay, but I'll keep an eye on the values to see that they make sense

  • Perhaps decode a small file to see if Grass' decoder and mine get the same thing.

Spill Event Issue

  • Regarding the previous issue of run #56 throwing an error: it was found that Spill Events contained more information than just the spill information. They also contained trigger information and scalers.

  • One issue is that there can be an arbitrary number of vmeTimes in a spill event. It seems that this has changed so that only ROC 2's vmeTime is considered. I will check this with Grass, because there is no way to know in advance how many vmeTimes are going to be included.

  • There may be more than one VME Time, and anything beyond that will have

Decoding Speed Issue

The decoder has been slow. Like, 2 ev/s slow. For benchmarking, I'm starting with nothing being submitted; the speed for this is 5000 ev/s. I will then enable submissions one at a time to see how they affect the speed.

  • No Submission: only the Event, Spill, and Run tables are being submitted. Speed = 5000 ev/s
  • Enable Hit Submission: Hits are submitted and mapped. Speed = 1500 ev/s
  • Enable TriggerHit Submission: Hits from the v1495 boards are submitted. Speed = 2000 ev/s

October 2012

Run Summary

  • Started a new schema called "summary" that can hold several compilations of information
  • Created a RunSummary table identical in form to the Run tables
  • Wrote a small bash script that fills it with all the rows of the Run tables from all the other run_% schemas:

#!/bin/bash
for i in {726..2173}
do
  mysql -h e906-gat2.fnal.gov -u root --password='[WITHHELD]' summary -e "INSERT INTO RunSummary SELECT * FROM run_00${i}_R001.Run"
done

Now one can query to see which sets of runs used a certain type of target, which runs used a certain trigger, etc.
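
For example (targetPos here is a guess at one of the Run-table fields, just to show the idea):

-- Which runs used a given target position:
SELECT runID FROM RunSummary WHERE targetPos = 2;

-- How many runs used each target position:
SELECT targetPos, COUNT(*) AS nRuns
FROM RunSummary
GROUP BY targetPos;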

CODA Write-Up

Today I began writing up the general structure of these cryptic CODA files, so that someone looking to understand them has a starting resource, as none yet exists.


-- BryanDannowitz - 2012-02-06

Topic attachments: profile.png (272.6 K, 2012-02-24, BryanDannowitz); profile_nosprintf.png (234.3 K, 2012-02-26, BryanDannowitz)