Two sensors per Layer? // Overlapping volumes // sensor excess

Dear all,

I was wondering whether there is an easy way to implement two sensors next to each other (same z-position, different x-position) inside a support structure as a single layer.
Since Allpix² only allows one sensor per layer, I had the following idea:

  1. Create one layer with support structures, cables etc.
  2. Add a second layer without support structures and place it at the same z-position as the first layer, but shifted in x so that the two sensors sit next to each other.
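For reference, the geometry file for this idea would look roughly like the sketch below (the detector names, the model name and the x-offsets are placeholders, not my actual values):

```ini
# Hypothetical geometry file: two copies of the same model at the same z,
# shifted in x so the sensors sit side by side.
[detector_left]
type = "mymodel"
position = -8mm 0mm 0mm
orientation = 0deg 0deg 0deg

[detector_right]
type = "mymodel"
position = 8mm 0mm 0mm
orientation = 0deg 0deg 0deg
```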

This setup leads to an overlapping-volumes error.
Because of this issue I contacted Paul Schuetze from the Allpix developer team, who gave me really great support. Thank you again!

The issue is that Geant4 requires a wrapper volume around everything defined as a layer, and this wrapper must always be a box. A layer defined with support structures is therefore placed by the GeometryBuilder inside a wrapper volume, and a second layer positioned inside the wrapper volume of the first layer leads to an overlap of volumes.
The GeometryBuilder creates this wrapper volume as small as possible but as large as required. This means that if one adds a larger chip/support underneath a sensor, the wrapper also covers the volume where one intends to place the second sensor. This results in the overlap…

Paul suggested placing the two sensors next to each other so that the wrapper of each layer has the same size as the sensor, and adding the support/cables as passive material.
This opened up another issue for me: when placing two sensors next to each other, Allpix reports an error that the wrapper volumes overlap.
Maybe I am wrong, but I think this has something to do with the sensor excess one can define in Allpix.

A sensor defined like this:

type = "monolithic"
number_of_pixels = 512 1024
pixel_size = 29.24um 26.88um
implant_size = 3um 3um
chip_thickness = 5um
sensor_thickness = 50um
sensor_excess_right = 50um
sensor_excess_left = 1208um
sensor_excess_top = 30um
sensor_excess_bottom = 30um

has the following dimensions (info from Geant4/Allpix):

Sensor dimensions:
(16.2289mm,27.5851mm,50um)
Wrapper dimensions of model:
(17.3869mm,27.5851mm,55um)
Chip dimensions:
(16.2289mm,27.5851mm,5um)

Focusing just on the x-component, I would naively expect the dimensions to be as follows:

X-dimension:
The sensor dimension should be:
512 × 29.24 um = 14.97088 mm
and indeed Allpix adds the sensor excess left and right:
512 × 29.24 um + 1208 um + 50 um = 16.22888 mm
The wrapper dimension should be:
512 × 29.24 um + 1208 um + 50 um = 16.22888 mm
In fact, Allpix adds the excess left twice:
512 × 29.24 um + 1208 um + 1208 um = 17.38688 mm
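The reported numbers can be reproduced with a few lines of Python, assuming the wrapper is sized symmetrically around the matrix centre, i.e. the larger of the two excesses is counted twice (this matches the values Allpix prints; the actual GeometryBuilder code may differ in detail):

```python
# Sketch: reproduce the reported sensor and wrapper x-dimensions (all in mm).
# Assumption: the wrapper is built symmetrically around the matrix centre,
# so the larger excess is counted on both sides.
n_cols = 512
pitch_x = 29.24e-3          # 29.24 um
excess_left = 1.208         # 1208 um
excess_right = 0.050        # 50 um

matrix_x = n_cols * pitch_x                                # 14.97088 mm
sensor_x = matrix_x + excess_left + excess_right           # 16.22888 mm (as reported)
wrapper_x = matrix_x + 2 * max(excess_left, excess_right)  # 17.38688 mm (as reported)

print(f"matrix:  {matrix_x:.5f} mm")
print(f"sensor:  {sensor_x:.5f} mm")
print(f"wrapper: {wrapper_x:.5f} mm")
```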

Placing two sensors next to each other would therefore require a shift larger than the sensor size including the excess, leaving a huge gap between the two sensors.
Maybe I am doing something wrong here or misunderstanding something. I hope you can help me with this issue.

Any advice would be highly appreciated!
Looking forward to your answer.

Cheers,
Tim

Dear @trogosch

thanks for your report. The culprit is this code here:

where we try to calculate the total model size from all the different components belonging to it. Probably in order to simplify things, e.g. for rotations of the model, the wrapper seems to be calculated symmetrically around the rotation center of the model. This means the longest distance in any dimension defines the wrapper size.

This is of course not really nice and we should look into how to solve this in a better way.

A minor comment on the side: from discussing with @pschutze I gather that your model consists of two ALPIDE sensors facing each other. The long sensor_excess_left most likely represents the periphery of your chip. In this case, your pixel (0,0) would be in the corner you would refer to as “upper left”, which is relatively unusual. May I suggest defining your sensor such that you will not have to deal with coordinate transformations later on? This would mean placing the periphery at the “bottom” end of the sensor - but this of course doesn’t solve your issue.

Let me look into this with @pschutze next week.

All the best,
Simon

Dear @simonspa,

thank you for your reply and the comment on my setup!

Yes, I also think that DetectorModel counts the larger of the left and right excess twice when calculating the wrapper dimension. This seems to be the case even for a sensor that is not rotated.

Indeed my detector model consists of two ALPIDE sensors facing each other and the large sensor excess represents the periphery of the chip.
I am not quite sure whether I understand what you mean by my “pixel (0,0) would be in the corner”.
Do you mean the first chip or the rotated one?
I prepared some slides so that we have a common basis to refer to. I hope this makes things easier.
Attached you can find these slides showing the setup. In the last slides I tried to visualise the pixel matrix as I think it is currently defined with my setup. In this case the first sensor would have pixel (0,0) in the bottom left, and the second (rotated) sensor would have pixel (0,0) at the top right. 201126_TimR_ALLPIX1.6GeometryErrorReport.pdf (2.4 MB)

Looking forward to your answer.

All the best,
Tim

Hi @trogosch

it looks like we really have to “in den sauren Apfel beißen” (bite the bullet) and check how to properly define device rotations when the rotation center does not coincide with the geometrical center of the wrapper - because that’s the issue.

Concerning your pixel enumeration: the ALPIDE ASIC reports a certain pixel address for each pixel struck. The convention is to start with the pixel closest to the periphery, leftmost when the periphery points downwards. In your case these would be the pixels at the position where “511” appears in your drawing. So I would suggest adapting your local coordinate system accordingly, because then you will get a 1:1 match between simulated and measured pixel hits.
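If you keep your current definition, matching measured addresses to simulated hits amounts to flipping one index. A minimal sketch of such a transformation (the matrix size and which axis is flipped are assumptions based on your slides, not something Allpix provides):

```python
# Sketch: map a pixel index from a system with (0,0) in one corner to a
# system with (0,0) in the diagonally opposite row, by inverting the row.
# N_ROWS is a placeholder; adjust the flip to your actual setup.
N_ROWS = 512

def flip_row(col: int, row: int) -> tuple[int, int]:
    """Invert the row index, e.g. to move pixel (0,0) from top to bottom."""
    return col, N_ROWS - 1 - row

print(flip_row(0, 0))  # -> (0, 511)
```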

/Simon

Just saw the very last slide - interesting that in your testbeam data the rows seem to be inverted. Well, then your definition makes sense, I guess… :slightly_smiling_face:

While we’re diving into G4 and rotations (again :slight_smile: ), one more minor comment: slide 10 of this presentation indicates that the pixel pitch of 29.24 um belongs to the direction with 1024 pixels. In your model definition, this is inverted.

Hi @simonspa and @pschutze,

Many thanks for the answer. I hope “der saure Apfel” can be swallowed without too much trouble.

@pschutze , thanks for the comment! You are right, I stupidly mixed up the pixel pitches in x and y.

Cheers,
Tim