Load resolution shape from data file. #261

Open · pkienzle opened this issue Feb 26, 2025 · 5 comments
@pkienzle (Member)

For certain instruments (ToF, CANDOR) the reduced data after stitching multiple wavelengths is better described by a uniform ΔQ rather than a truncated normal. This information should be stored with the reduced file metadata and loaded with the file rather than requiring the user to specify it during load.

Make sure the webview GUI allows the form of the ΔQ resolution to be visible and editable on the page.

Note: the default is "uniform" in the BaseProbe dataclass but "normal" in the load4 function. Replace "uniform" with "normal" in the BaseProbe, and set the default resolution to None to use this default.
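A minimal sketch of the proposed default plumbing, assuming a plain dataclass field (the field layout and `load4` signature here are illustrative, not the actual refl1d API): `BaseProbe` would default to `"normal"`, and `load4` would pass `resolution=None` so the dataclass default wins unless the file metadata or the caller overrides it.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class BaseProbe:
    # Proposed default: "normal" (was "uniform").
    resolution: str = "normal"


def load4(filename: str, resolution: Optional[str] = None) -> BaseProbe:
    # Illustrative loader: the real one would read the resolution shape
    # from the reduced file metadata when present. Here we only show
    # the default plumbing: None means "use the BaseProbe default".
    probe = BaseProbe()
    if resolution is not None:
        probe.resolution = resolution
    return probe
```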

@andyfaff (Contributor)

> For certain instruments (ToF, CANDOR) the reduced data after stitching multiple wavelengths is better described by a uniform ΔQ rather than a truncated normal.

This is interesting. Can you point to any information where I can learn more?

@pkienzle (Member, Author)

For the CANDOR instrument we have tiny slits on the detector, so small Δθ, and analyzer blades with narrow Δλ, so the ΔQ for each point is tiny. In particular, it is much smaller than the bin size we are using when we are stitching the data from about 1500 points down to 150. After normalizing by incident intensity we are left with a bunch of narrow peaks within a fixed width window which we average according to a mixture distribution.
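The bin statistics above can be sketched with the standard mean and variance of a mixture distribution: the averaged Q is the weighted mean of the peak centers, and the bin variance combines the within-peak variance with the between-peak spread (the function below is an illustration of that formula, not the reduction code):

```python
import numpy as np


def mixture_bin(q, dq, weight):
    """Combine narrow peaks at centers q with 1-sigma widths dq and
    weights (e.g. normalized incident intensity) into a single bin
    (Q, dQ) using the mixture distribution's mean and std dev."""
    w = np.asarray(weight, dtype=float)
    w = w / w.sum()
    q = np.asarray(q, dtype=float)
    dq = np.asarray(dq, dtype=float)
    Q = np.sum(w * q)  # mixture mean
    # Mixture variance = within-peak variance + between-peak spread.
    var = np.sum(w * (dq**2 + q**2)) - Q**2
    return Q, np.sqrt(var)
```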

See the comments in our reduction code here.

The way we are running CANDOR, with many shorter measurements at a large number of angles, gives us much more uniformity within each bin. If we instead measured fewer angles for longer to minimize the overlap region, then the Q distribution in the bins of the overlap region would be too spiky to model as either uniform or Gaussian. I'm not sure what distribution is best for tabletop X-ray sources.

The tricks we are doing for sample resolution broadening and angle offset correction only work because CANDOR has a 0.2 nm range of wavelengths. Simulations on an ISIS-style instrument with a 1.0+ nm range show they don't work.

@andyfaff (Contributor)

@pkienzle thank you for the information.

@andyfaff (Contributor)

How can one use refl1d to fit the uniform dQ distribution?

@pkienzle (Member, Author)

Set `probe.resolution = "uniform"`. This then calls `convolve_uniform`:

```python
import numpy as np
from numba import njit, prange

# Half-width of a unit-std-dev uniform distribution: sqrt(12)/2 == sqrt(3)
root_12_over_2 = np.sqrt(12.0) / 2.0


@njit(parallel=True)
def convolve_uniform(xi, yi, x, dx, y):
    left_index = 0
    N_xi = len(xi)
    N_x = len(x)
    for k in prange(N_x):
        x_k = x[k]
        # Convert 1-sigma width to 1/2 width of the region
        limit = dx[k] * root_12_over_2
        # Find integration limits, bound by the range of the data
        left, right = max(x_k - limit, xi[0]), min(x_k + limit, xi[-1])
        if right < left:
            # Convolution does not overlap data range.
            y[k] = 0.0
            continue
        # Find the starting point for the convolution by first scanning
        # forward until we reach the next point greater than the limit
        # (we might already be there if the next output point has wider
        # resolution than the current point), then scanning backwards to
        # get to the last point before the limit. Make sure we have at
        # least one interval so that we don't have to check edge cases
        # later.
        while left_index < N_xi - 2 and xi[left_index] < left:
            left_index += 1
        while left_index > 0 and xi[left_index] > left:
            left_index -= 1
        # Set the first interval.
        total = 0.0
        right_index = left_index + 1
        x1, y1 = xi[left_index], yi[left_index]
        x2, y2 = xi[right_index], yi[right_index]
        # Subtract the excess from the left interval before the left edge.
        if x1 < left:
            # Subtract the area of the rectangle from (x1, 0) to (left, y1)
            # plus 1/2 the rectangle from (x1, y1) to (left, y'),
            # where y' is the y value where the line (x1, y1) to (x2, y2)
            # intersects x=left. This can be computed as follows:
            #     offset = left - x1
            #     slope = (y2 - y1)/(x2 - x1)
            #     yleft = y1 + slope*offset
            #     area = offset * y1 + offset * (yleft - y1)/2
            # It can be simplified to the following:
            #     area = offset * (y1 + slope*offset/2)
            offset = left - x1
            slope = (y2 - y1) / (x2 - x1)
            area = offset * (y1 + 0.5 * slope * offset)
            total -= area
        # Do trapezoidal integration up to and including the end interval
        while right_index < N_xi - 1 and x2 < right:
            # Add the current interval if it isn't empty
            if x1 != x2:
                area = 0.5 * (y1 + y2) * (x2 - x1)
                total += area
            # Move to the next interval
            right_index += 1
            x1, y1, x2, y2 = x2, y2, xi[right_index], yi[right_index]
        if x1 != x2:
            area = 0.5 * (y1 + y2) * (x2 - x1)
            total += area
        # Subtract the excess from the right interval after the right edge.
        if x2 > right:
            # Expression for the area to subtract using rectangles:
            #     offset = x2 - right
            #     slope = (y2 - y1)/(x2 - x1)
            #     yright = y2 - slope*offset
            #     area = -(offset * yright + offset * (y2 - yright)/2)
            # It can be simplified to the following:
            #     area = -offset * (y2 - slope*offset/2)
            offset = x2 - right
            slope = (y2 - y1) / (x2 - x1)
            area = offset * (y2 - 0.5 * slope * offset)
            total -= area
        # Normalize by interval length
        if left < right:
            y[k] = total / (right - left)
        elif x1 < x2:
            # If dx = 0, use the value interpolated at x (with left=right=x).
            offset = left - x1
            slope = (y2 - y1) / (x2 - x1)
            y[k] = y1 + slope * offset
        else:
            # At an empty interval in the theory function. Average the y.
            y[k] = 0.5 * (y1 + y2)
```

This is wrapped in a numba jit. Brian may be able to tell you how well it performs relative to the C implementation here:

```c
void
convolve_uniform(size_t Nin, const double xin[], const double yin[],
                 size_t Nout, const double x[], const double dx[], double y[])
{
    int left_index = 0;
    size_t k = 0;
    for (size_t out = 0; out < Nout; out++) {
        const double xo = x[out];
        const double sigma = dx[out];
        /* Convert 1-sigma width to the half-width of the uniform window. */
        const double limit = sigma * sqrt(3.0);
        const double left = ((xo - limit) > xin[0]) ? (xo - limit) : xin[0];
        const double right = ((xo + limit) < xin[Nin-1]) ? (xo + limit) : xin[Nin-1];
        if (right < left) {
            /* Convolution does not overlap data range. */
            y[k] = 0.;
            k++;
            continue;
        }
        /* Find the starting point for the convolution by first scanning
         * forward until we reach the next point greater than the limit
         * (we might already be there if the next output point has wider
         * resolution than the current point), then scanning backwards to
         * get to the last point before the limit. Make sure we have at
         * least one interval so that we don't have to check edge cases
         * later.
         */
        while (left_index < Nin-2 && xin[left_index] < left) left_index++;
        while (left_index > 0 && xin[left_index] > left) left_index--;
        /* Set the first interval. */
        double total = 0.;
        int right_index = left_index + 1;
        double x1 = xin[left_index], y1 = yin[left_index];
        double x2 = xin[right_index], y2 = yin[right_index];
        /* Subtract the excess from the left interval before the left edge. */
        if (x1 < left) {
            double offset = left - x1;
            double slope = (y2 - y1)/(x2 - x1);
            double area = offset * (y1 + 0.5*slope*offset);
            total -= area;
        }
        /* Do trapezoidal integration up to and including the end interval. */
        while (right_index < Nin-1 && x2 < right) {
            if (x1 != x2) {
                double area = 0.5*(y1 + y2)*(x2 - x1);
                total += area;
            }
            right_index++;
            x1 = x2;
            y1 = y2;
            x2 = xin[right_index];
            y2 = yin[right_index];
        }
        if (x1 != x2) {
            double area = 0.5*(y1 + y2)*(x2 - x1);
            total += area;
        }
        /* Subtract the excess from the right interval after the right edge
         * and normalize by interval length.
         */
        if (x2 > right) {
            double offset = x2 - right;
            double slope = (y2 - y1)/(x2 - x1);
            double area = offset * (y2 - 0.5*slope*offset);
            total -= area;
        }
        if (left < right) {
            y[k] = total / (right - left);
        } else if (x1 < x2) {
            double offset = left - x1;
            double slope = (y2 - y1)/(x2 - x1);
            y[k] = y1 + slope*offset;
        } else {
            y[k] = 0.5*(y1 + y2);
        }
        k++;
    }
}
```
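Both implementations convert the stored 1-sigma width to a half-width via sqrt(3): a uniform distribution on [-w, w] has standard deviation w/sqrt(3), so w = sigma·sqrt(3) = sigma·sqrt(12)/2. A standalone numeric check of that constant (not part of the refl1d code):

```python
import numpy as np

root_12_over_2 = np.sqrt(12.0) / 2.0  # == sqrt(3)

# Draw from a uniform distribution whose half-width is sigma*sqrt(3);
# its sample standard deviation should come back as sigma.
rng = np.random.default_rng(0)
sigma = 0.01
w = sigma * root_12_over_2
samples = rng.uniform(-w, w, size=1_000_000)
print(np.isclose(samples.std(), sigma, rtol=1e-2))
```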

Note that we are not modifying the set of Q points used for the convolution when the form of the resolution function changes. We should ensure that we have enough calculated points to support the ΔQ at each measured point. For a normal distribution this requires a wider window with points concentrated at the center; for a uniform distribution the points should be sampled uniformly within the resolution width. The method probe.oversample assumes a normal distribution.
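For the uniform case, oversampling could instead spread calculation points evenly over the ±sqrt(3)·ΔQ window of each measured point. A sketch of that idea (illustrative only; `probe.oversample` itself samples from a normal distribution):

```python
import numpy as np


def oversample_uniform(Q, dQ, n=21):
    """Return calculation points covering each measured Q with n points
    spread evenly over the full width of its uniform resolution window.
    dQ is the 1-sigma width, so the half-width is sqrt(3)*dQ."""
    Q = np.asarray(Q, dtype=float)
    dQ = np.asarray(dQ, dtype=float)
    limit = np.sqrt(3.0) * dQ
    # n evenly spaced offsets per window, broadcast over all points,
    # then flattened, sorted, and deduplicated.
    offsets = np.linspace(-1.0, 1.0, n)
    calc_q = (Q[:, None] + limit[:, None] * offsets[None, :]).ravel()
    return np.unique(calc_q)
```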
