Spectrum Recorder PCIE Card?

 Posts: 96
 Joined: Sat Oct 20, 2018 10:01 am
 Real name: Harald Consul
Re: Spectrum Recorder PCIE Card?
Thank you, guys. The discussion of all those pitfalls in the electronic path of the analog signal, starting at the PMT and ending at the analog preprocessing stage before the A/D converter, has been very insightful.
I have figured out that 50 samples (observations) may be a good start for applying mixed-distribution models, since about 30 observations is the minimum for testing a normal distribution. As the pulse amplitude is about 20 nanoseconds long, that works out to 2.5 GSample/s.
However, at the moment 2.5 MSample/s is the maximum among affordable data acquisition (DAQ) cards on eBay, so I am postponing the purchase of the DAQ card and waiting a year or two for higher sampling rates. In the meantime I will bridge the gap with my 200 MHz digital oscilloscope, which is, however, quite limited in the total number of samples it can store. I am not willing to pay 5000 USD for a 2.5 GS/s card.
 Rich Feldman
 Posts: 1171
 Joined: Mon Dec 21, 2009 11:59 pm
 Real name: Rich Feldman
 Location: Santa Clara County, CA, USA
Re: Spectrum Recorder PCIE Card?
Sorry, can't stay away.
Harald, I think you are missing the main point. Let's try again using one specific example.
Suppose your 20 ns pulses (with arbitrary alignment to the sampling clock, and arbitrarily overlapping neighbors) are smooth and, hypothetically, have minimum rise times on the order of 2 ns, so that after a linear, continuous-time low-pass filter that stops everything above 250 MHz, very little is lost. The example continues with the assumption that from the continuous-time filtered waveform you could accurately get the location and size of each pulse, even when they partly overlap.
Now you want to do the processing using discrete-time methods (and discrete voltage resolution, but that's an independent detail).
You want 50 points per 20 ns pulse, which is 2.5 GS/s (0.4 ns sample spacing). OK so far.
I claim that capturing 10 points per pulse (0.5 GS/s, 2.0 ns spacing) in the filtered signal is enough.
That is because you can compute the missing 40 points, without error, by proper multipoint interpolation from the sparser samples. You could interpolate to a time resolution of 500 points per pulse if you wanted, which is practically continuous time. It works because of the properties of bandlimited signals. I wish there were time to make an illustrated demonstration to overcome intuitive resistance to the concept.
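To make that claim concrete, here is a minimal sketch (the test signal, record length, and frequencies are all made up for illustration; any signal with no content above 250 MHz behaves the same way). FFT-based resampling, which for a record that is periodic over the window is equivalent to sin(x)/x interpolation, rebuilds the 0.4 ns grid from the 2.0 ns samples:

```python
import numpy as np
from scipy.signal import resample

# Hypothetical bandlimited test signal: nothing above 250 MHz,
# so 0.5 GS/s (2.0 ns spacing) already satisfies Nyquist.
def signal(t):
    return np.sin(2 * np.pi * 80e6 * t) + 0.5 * np.sin(2 * np.pi * 200e6 * t)

t_sparse = np.arange(50) * 2.0e-9    # 0.5 GS/s over a 100 ns record
t_fine = np.arange(250) * 0.4e-9     # the 2.5 GS/s grid we "wish" we had

x_sparse = signal(t_sparse)

# FFT-based resampling x5: equivalent to periodic sinc interpolation
x_rec = resample(x_sparse, 250)

# the interpolated points match the true fine-grid values
err = np.max(np.abs(x_rec - signal(t_fine)))
print(err)
```

The printed error is at floating-point rounding level: the 40 "missing" points per pulse are computed, not guessed, because a signal with no energy above 250 MHz is fully determined by its 0.5 GS/s samples.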
All models are wrong; some models are useful.  George Box

 Posts: 96
 Joined: Sat Oct 20, 2018 10:01 am
 Real name: Harald Consul
Re: Spectrum Recorder PCIE Card?
Thanks Rich!
First of all, it is quite common in interdisciplinary communication that scientists from different disciplines do not fully understand each other, simply due to different education and different terminology. This sometimes requires a lot of patience on both sides.
When a mixed-distribution model is applied to distinguish between a multiple-particle amplitude and a single-particle amplitude, this is based purely on the shape, or rather the curvature, of the signal amplitude.
Any interpolation between two samples will not exactly match the true curvature of the amplitude. On the contrary, interpolation always pushes the curvature towards the one that was assumed for the interpolation. Since any interpolation must be based on some curvature assumption, interpolation can produce further data points, but no additional information about the true distribution/curvature of the signal.
Thus, the only way to decompose a piled-up multi-particle signal into single-particle events would be a better mathematical approach than the one I mentioned, one that requires fewer than 50 samples/data points per amplitude.
 Rich Feldman
 Posts: 1171
 Joined: Mon Dec 21, 2009 11:59 pm
 Real name: Rich Feldman
 Location: Santa Clara County, CA, USA
Re: Spectrum Recorder PCIE Card?
Harald,
Would you mind sending, as lists of numbers, one or more realistic-looking PMT pulses with 0.4 ns between samples? You can draw them yourself, and they could even be piecewise linear if that simplifies your task.
I want to whip up, or at least threaten to whip up, a numerical demonstration.
We will resample at 1/5 of the rate, then see how sin(x)/x interpolation perfectly reconstructs all points along the curves between the "sparse" samples.
Of course we will put an anti-aliasing filter before the sparse sampling. All interpolated points will exactly match the full-rate signal after the filter.
You need to be content with the fidelity of the full-rate filtered pulses, which is like having front-end electronics with a useful but not excessive analog bandwidth. Let's see how that looks when the original pulses are designed by you. Piecewise-linear shapes would get their sharp corners rounded by the filter, and excessively fast rise times would be knocked down a bit.
The demo will have knobs to choose the time shift between two superimposed PMT pulses, and the phase of the sparse sampling.
Addition of the two time-shifted pulses can happen before or after the filtering, before or after the sparse sampling, and before or after the interpolation. We will see that they all give the same answer.
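The "same answer either way" claim is just linearity. A quick sketch (the pulse shape, positions, and filter order are invented for the demo) showing that filtering the sum of two time-shifted pulses equals the sum of the individually filtered pulses:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 2.5e9                               # full-rate grid, 0.4 ns spacing
t = np.arange(500) / fs                  # 200 ns record

# made-up smooth 20 ns pulse shape (raised-cosine bump)
def pulse(t, t0, width=20e-9):
    x = (t - t0) / width
    return np.where((x > 0) & (x < 1), np.sin(np.pi * x) ** 2, 0.0)

x1 = pulse(t, 30e-9)                     # first PMT pulse
x2 = 0.6 * pulse(t, 42e-9)               # overlapping, time-shifted neighbor

# anti-aliasing filter: 4th-order Butterworth low-pass at 250 MHz
b, a = butter(4, 250e6, fs=fs)

y_sum_first = lfilter(b, a, x1 + x2)                      # add, then filter
y_filter_first = lfilter(b, a, x1) + lfilter(b, a, x2)    # filter, then add

diff = np.max(np.abs(y_sum_first - y_filter_first))
print(diff)
```

The difference is at rounding level: a linear filter (and likewise sampling and bandlimited interpolation) commutes with addition, so the order of those operations in the demo does not matter.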
Respectfully,
Rich
All models are wrong; some models are useful.  George Box

 Posts: 96
 Joined: Sat Oct 20, 2018 10:01 am
 Real name: Harald Consul
Re: Spectrum Recorder PCIE Card?
Rich, currently I do not have any real scintillation data, as I am still working on figuring out the electrical pin assignment of my "brand new" second-hand scintillator tube.
Compare
viewtopic.php?f=20&t=12635&p=82261#p82261
viewtopic.php?f=13&t=12581&start=10

 Posts: 66
 Joined: Wed Mar 08, 2017 12:13 am
 Real name:
 Location: SoCal
Re: Spectrum Recorder PCIE Card?
One way to separate peaks is with deconvolution.
Harald, do you just want to precisely count the particles, or do you also want to measure the energy of each particle?
If you only want to count the particles, it seems to me that you wouldn't need such a high sample rate; you would only need enough to distinguish between the different pulse types.

 Posts: 96
 Joined: Sat Oct 20, 2018 10:01 am
 Real name: Harald Consul
Re: Spectrum Recorder PCIE Card?
I want to count them separately by energy.
E.g.:
#127 x 2.3 keV
#245 x 3.5 keV
# 87 x 7.1 keV
Deconvolution looks good. There are three R packages for deconvolution:
 dtangle
 deconvolveR
 decon
I do not have any experience with deconvolution yet. How many samples (measurements) per amplitude would deconvolution require?

 Posts: 66
 Joined: Wed Mar 08, 2017 12:13 am
 Real name:
 Location: SoCal
Re: Spectrum Recorder PCIE Card?
When the time-domain function of each type of pulse, F(x), is known, you can use FFT math to convert it to its frequency-domain representation, G(x).
Then, when you measure a signal with multiple peaks squashed together, you can use deconvolution on the G(x)'s to separate the peaks.
The number of samples needed will depend on how similar each pulse type is. You need a greater sample rate to distinguish between 2.3 keV and 3.5 keV than you would between 2.3 keV and 7.1 keV. We need to know what each pulse looks like first to give a definitive answer.
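A toy sketch of the idea in Python/NumPy (the Gaussian pulse shape, positions, and "energies" are all invented for illustration; a real analysis would use the measured single-pulse shape). The measured record is divided by G(x) in the frequency domain, with a small regularizer so that near-zero bins do not blow up:

```python
import numpy as np

n = 512
t = np.arange(n)

# hypothetical single-pulse shape F: a unit-area Gaussian stand-in
pulse = np.exp(-0.5 * ((t - 20) / 5.0) ** 2)
pulse /= pulse.sum()

# made-up record: three pulses, the first two piled up
events = {100: 2.3, 115: 3.5, 300: 7.1}   # position -> amplitude ("energy")
impulses = np.zeros(n)
for pos, amp in events.items():
    impulses[pos] = amp
measured = np.real(np.fft.ifft(np.fft.fft(impulses) * np.fft.fft(pulse)))

# frequency-domain (Wiener-style) deconvolution
G = np.fft.fft(pulse)
eps = 1e-6                                # regularizer against tiny bins
recovered = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(G)
                                / (np.abs(G) ** 2 + eps)))

# recovered is (nearly) the original impulse train: narrow spikes at the
# event positions, with heights in the same order as the amplitudes
print(np.argmax(recovered))
```

The regularizer trades sharpness for noise robustness; even so, the two piled-up events at samples 100 and 115 come out as two distinct spikes.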

 Posts: 59
 Joined: Sun Feb 19, 2017 4:32 pm
 Real name: Chris Mullins
 Location: Shenandoah Valley, VA
 Contact:
Re: Spectrum Recorder PCIE Card?
Harald,
I'm not sure those deconvolution functions in R are what you are looking for. If the pulse shape is dominated by the impulse response of the amplifier, and you know the amplifier transfer function, you can undo the time-domain effect of the amp by a deconvolution, which can be done in either the time or the frequency domain. This would restore the original, much narrower pulses that excited the amp, and presumably would undo the overlap if there are multiple closely spaced pulses. It is a time-series approach to undoing a (presumably) linear transformation that smeared the original narrow pulse into a much wider one, not a statistical approach.
Like Rich said, if the amplifier naturally bandlimits the original pulse to, say, 250 MHz (which it does if it's linear and is limiting the rise time to ~2 ns), then there is no loss of information in sampling at ~500 MS/s. If you sample faster, you're not gaining any information in a signal-processing sense, because that information has already been removed by the bandlimiting amplifier. The deconvolution technique would work just as well at 500 MS/s as it would at 5 GS/s or even 50 GS/s.
I think there's some confusion here because some of the terms used in discrete-time signal processing also appear in statistical analysis, but with different meanings, e.g. convolution, sampling, etc. Those R deconvolution functions don't seem to be related at all to the time-series deconvolution John was talking about (e.g. as demonstrated here with MATLAB: https://terpconnect.umd.edu/~toh/spectr ... ution.html).
An electrical-engineering/signal-processing approach would be to simulate this in something like MATLAB (or a free alternative, Octave), which is geared more towards time-series analysis, and develop your basic algorithm there. Then you could use a function generator with dual outputs to generate narrow pulses (sometimes closely spaced enough that the amp outputs would overlap), combine them, run them through a typical PMT amp/filter (or equivalent), sample with a digital scope, and try your algorithm on the result. That would give you some repeatable data to work with before you have your real scintillator working with real sources.
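That simulation workflow can be sketched in Python/NumPy+SciPy rather than MATLAB (the amplifier model, pulse spacing, and regularizer are assumptions for the demo): two narrow excitations close enough that the amplifier outputs overlap, then a frequency-domain deconvolution with the known impulse response to recover them:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 2.5e9
n = 1024

# two narrow excitations 1.6 ns apart -- the "original" events
x = np.zeros(n)
x[100] = 1.0
x[104] = 0.7

# hypothetical amplifier: 2nd-order Butterworth low-pass at 250 MHz
b, a = butter(2, 250e6, fs=fs)

imp = np.zeros(n)
imp[0] = 1.0
h = lfilter(b, a, imp)        # known amplifier impulse response
y = lfilter(b, a, x)          # measured record: the two outputs overlap

# undo the amplifier in the frequency domain (regularized division)
H = np.fft.fft(h)
eps = 1e-12
x_rec = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H)
                            / (np.abs(H) ** 2 + eps)))

print(x_rec[100], x_rec[104])
```

Because the amplifier is linear and its transfer function is known, the overlapped output deconvolves back to two clearly separated narrow pulses; with real, noisy data the regularizer would need to be larger.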

 Posts: 1477
 Joined: Thu Apr 22, 2004 2:29 am
 Real name: John Futter
 Contact:
Re: Spectrum Recorder PCIE Card?
Again:
What Harald wants to do is exactly what an MCA does: sort pulse heights into bins.
And no, a computer or microcontroller is not the way to go if you want to count fast.
I have already given the answer.