Use generic workflow to compute time of flight. #170
Conversation
Were there any modifications here, or was the code only moved?
src/ess/nmx/executables.py
Outdated
lookup_table = compute_lookup_table(
    base_wf=wf, workflow_config=workflow_config, detector_names=detector_names
)
wf[TimeOfFlightLookupTable] = lookup_table
Does the workflow need to be modified in-place or can we return a copy of the workflow?
The latter would feel safer.
This is for caching the lookup table back in the workflow, so I thought there was no need to copy the workflow.
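For illustration, a minimal sketch of the copy-based alternative discussed above, assuming wf is a sciline.Pipeline (which provides copy()); the helper name is made up:

import sciline

def with_cached_lookup_table(wf: sciline.Pipeline, lookup_table) -> sciline.Pipeline:
    # Work on a copy so the caller's workflow is left untouched.
    # TimeOfFlightLookupTable imported as in executables.py.
    wf = wf.copy()
    wf[TimeOfFlightLookupTable] = lookup_table
    return wf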
nmx_det = tof.Detector(distance=max(ltotal_range), name="detector")
model = tof.Model(source=source, choppers=[], detectors=[nmx_det])
results = model.run()
events = results["detector"].data.squeeze().flatten(to="event")
Note that this will start going wrong if choppers are later added to the model, as we need to filter out the events that are blocked by the choppers (they have a boolean mask).
You mean if we ever add choppers, we should not set the wavelength range?
No, I meant that if you have choppers in the beamline, after doing
events = results["detector"].data.squeeze().flatten(to="event")
you also need to do something like
events = events[~events.masks["blocked_by_others"]]
to filter out all the events that actually don't make it to the detector.
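A compact sketch combining the two steps above, assuming the tof results only carry the "blocked_by_others" mask once choppers are present in the model:

events = results["detector"].data.squeeze().flatten(to="event")
if "blocked_by_others" in events.masks:
    # Drop events that a chopper blocks before they reach the detector.
    events = events[~events.masks["blocked_by_others"]]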
src/ess/nmx/workflows.py
Outdated
return merged


def select_detector_names(
Will we ever have files that don't contain the 3 detector panels?
Yes, the number of detectors is flexible for NMX.
src/ess/nmx/workflows.py
Outdated
wf = map_detector_names(
    wf=wf,
    detector_names=detector_names,
    mapped_type=DetectorLtotal[SampleRun],
Not sure I get why we are mapping at the DetectorLtotal level...
I was expecting to use a Pandas dataframe like in the Sciline docs, with a column of type NeXusDetectorName holding the names for detectors 0, 1, 2?
Edit: on second thought, I think I now understand that we are doing this to get the range of Ltotal for the run by computing the min and max over all pixels.
I would vote for simplifying the whole thing: you already know, approximately, what the range of Ltotal will be. One constraint is that the panels are not going to move outside the cave, so we can just set a hard-coded, very wide range. It is cheap to compute a table, even for a large range.
In addition, will NMX use data from monitors? If those need to be converted to tof, we need to extend the table to also cover them.
I would then get rid of the mapping here.
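For reference, a sketch of the param-table mapping pattern from the Sciline docs that the comment above refers to; the panel names are illustrative and NeXusDetectorName / DetectorLtotal[SampleRun] are assumed to be imported as in workflows.py:

import pandas as pd
import sciline

panels = ['detector_panel_0', 'detector_panel_1', 'detector_panel_2']
param_table = pd.DataFrame({NeXusDetectorName: panels}).rename_axis(index='panel')
mapped = wf.map(param_table)
# Compute DetectorLtotal for every panel in one go, then take min/max for the range.
ltotals = sciline.compute_mapped(mapped, DetectorLtotal[SampleRun])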
Yeah, Johannes said the same thing, so I guess we can just make the ltotal range configurable?
return wf.compute(TimeOfFlightLookupTable)


__all__ = ['NMXWorkflow']
I think generally I would separate things more.
I would make one workflow where we get the LUT from a file, and another where we compute it on the fly (finding good names for the two).
When using the first one, all we need to set is the LUT filename.
When using the second, we need to set wavelength ranges, seed, number of neutrons, etc.
I think it would increase simplicity. Although I did not think this through that much, and maybe it does not play well with the command-line executables approach...
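A rough sketch of that split, only to illustrate the idea; all names and types below are made up, not the package's actual API:

from typing import NewType
import sciline

LutFilename = NewType('LutFilename', str)
TofLookupTable = NewType('TofLookupTable', object)

def lut_from_file(filename: LutFilename) -> TofLookupTable:
    # Hypothetical provider: load a precomputed lookup table from disk.
    return TofLookupTable(filename)

def LutFileWorkflow() -> sciline.Pipeline:
    # First variant: the only required user input is the LUT filename.
    return sciline.Pipeline([lut_from_file])

wf = LutFileWorkflow()
wf[LutFilename] = LutFilename('nmx_lut.h5')
table = wf.compute(TofLookupTable)
# The on-the-fly variant would instead insert a simulation provider and
# require wavelength range, seed, number of neutrons, etc. as parameters.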
Now that we don't calculate LtotalRange from the files, it's not that complicated anymore.
I think we would like the single interface...
src/ess/nmx/workflows.py
Outdated
    file_path: Filename[SampleRun],
) -> Position[snx.NXcrystal, SampleRun]:
    """Temporary provider to retrieve crystal rotation from Nexus file."""
    with snx.File(file_path) as file:
Can we use the load_from_path from essreduce here? Would it simplify some things?
Yeeees...! I updated it. Thank you!
I think we might not have to merge the Generic workflow and the LookupTableWorkflow in workflows.py.
Having them separate might be simpler because we can remove the logic for mapping and merging workflows etc.
If I understand correctly the workflows are merged to use the Ltotal from the files to decide the limits of the lookup table, but that is not done in any of the other instruments (as far as I am aware) and it might not be necessary here either.
But you decide what is best.
@jokasimr @nvaytet I updated the workflow in the last commit to simply set the ltotal range from the configuration. Thank you for the reviews!
tof_simulation_min_ltotal: float = Field(
    title="TOF Simulation Minimum Ltotal",
    description="Minimum total flight path for TOF simulation in meters.",
    default=150.0,
)
tof_simulation_max_ltotal: float = Field(
    title="TOF Simulation Maximum Ltotal",
    description="Maximum total flight path for TOF simulation in meters.",
    default=170.0,
)
This range is from the McStas simulation that Aaron has been working on.
I gave a margin of ~10 meters around the actual ltotal range.
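A sketch of how these two fields could feed the lookup-table computation; that the range parameter is the LtotalRange type mentioned later in the thread, and that it takes scipp scalars in meters, are assumptions:

import scipp as sc

# wf and workflow_config as in the executables.py snippet above.
wf[LtotalRange] = (
    sc.scalar(workflow_config.tof_simulation_min_ltotal, unit='m'),
    sc.scalar(workflow_config.tof_simulation_max_ltotal, unit='m'),
)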
src/ess/nmx/workflows.py
Outdated
    detector_names = sorted(set(detector_names))
    return [detector_names[i_d] for i_d in detector_ids]
else:
    return ['detector_panel_0', 'detector_panel_1', 'detector_panel_2']
Just hard code ['detector_panel_0', 'detector_panel_1', 'detector_panel_2', 'detector_panel_3', ...]?
src/ess/nmx/executables.py
Outdated
)
detector_metas[detector_name] = detector_meta
# Binning into 1 bin and getting final tof bin edges later.
tof_das[detector_name] = results[TofDetector[SampleRun]].bin(tof=1)
Consider using results[TofDetector[SampleRun]].bins.coords['tof'].min() and .max() to get the tof range; it may be cheaper than binning (which may re-order all the events).
)
tof_das = sc.DataGroup()
detector_metas = sc.DataGroup()
for detector_name in detector_names:
Consider doing this part by mapping the workflow over detector names?
src/ess/nmx/executables.py
Outdated
base_wf = NMXWorkflow()


# Insert parameters and cache intermediate results
base_wf[Filename] = input_file_path
Suggested change:
- base_wf[Filename] = input_file_path
+ base_wf[Filename[SampleRun]] = input_file_path
tof_das[detector_name] = results[TofDetector[SampleRun]]


# Make tof bin edges covering all detectors
# TODO: Allow user to specify tof binning parameters from config
I opened #176
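For context, a sketch of one way the shared bin edges could be built from the per-detector extrema; it assumes each entry of tof_das was binned into a single tof bin (as in the earlier diff), and the variable names are illustrative:

import scipp as sc

lows = sc.concat([da.coords['tof'][0] for da in tof_das.values()], 'detector')
highs = sc.concat([da.coords['tof'][-1] for da in tof_das.values()], 'detector')
# Common edges spanning all detectors, with a user-chosen number of bins.
tof_edges = sc.linspace(
    'tof', lows.min().value, highs.max().value, num=nbins + 1, unit=lows.unit
)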
return logging.getLogger(__name__).info


def _finalize_tof_bin_edges(
This function doesn't seem to be used anymore...
hist = _retrieve_one_hist(reduction(config=reduction_config))


# Check that the number of time bins is as expected.
assert len(hist.coords['tof']) == 21  # nbins + 1 edges
We should have tests that check both cases: using a tof LUT filename, and computing the LUT on the fly.
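A possible shape for such a test, parametrizing over the two LUT modes; the config field name and the fixtures are placeholders, not the real API:

import pytest

@pytest.mark.parametrize('use_lut_file', [True, False])
def test_reduction_tof_bins(use_lut_file, reduction_config, lut_file_path):
    if use_lut_file:
        # Hypothetical config field pointing at a precomputed LUT file.
        reduction_config.tof_lookup_table_filename = lut_file_path
    hist = _retrieve_one_hist(reduction(config=reduction_config))
    assert len(hist.coords['tof']) == 21  # nbins + 1 edges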
@aaronfinke
This is the first step of the generic workflow migration.
There are some breaking changes to the command-line interface.
Can you please tell me if it works with the file you have?
Here is the command line with the options that work as expected:
essnmx-reduce \
  --input-file PATH_TO_THE_INPUT_NEXUS_FILE \
  --nbins 50 \
  --output-file test_output.h5 \
  --verbose

What's NOT included in this PR