qceff dartlab #740

Merged · 52 commits · Jan 23, 2025

Conversation

@hkershaw-brown (Member) commented Sep 25, 2024

Description:

DARTLAB qceff from Jceff

Fixes issue

Fixes #739
Note the slides are not in this pull request (yet!)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update

Documentation changes needed?

  • My change requires a change to the documentation.
    • I have updated the documentation accordingly.

Tests

Run by a group of workshop attendees at AGOS 2024

Checklist for merging

  • Updated changelog entry
  • Documentation updated
  • Update conf.py

Checklist for release

  • Merge into main
  • Create release from the main branch with appropriate tag
  • Delete feature-branch

Testing Datasets

  • Dataset needed for testing available upon request
  • Dataset download instructions included
  • No dataset needed

jlaucar added 30 commits May 1, 2024 10:33
This involved adding optional arguments to return information about the weights
in the prior and posterior bins to obs_increment_rhf.m
…ne the pdf for a bounded normal rank histogram given the ensemble members and
information about the tail normals.
distribution plot is updated whenever the buttons select a new
distribution.
…instead of using hard-coded bounds from oned_ensemble's axes.
…se where bnrh is bounded on the left at 0. oned_ensemble is believed to be working
for all cases with the improved plotting. bounded_oned_ensemble is believed
to be functioning correctly except for the case of applying inflation with
the gamma or BNRH distributions.
in which the inflated prior mean value was not being correctly computed.
This was also changed in oned_ensemble.m where it did not produce a
wrong answer (since inflated mean is always the same as prior mean)
but could have led to a misunderstanding when looking at the code.
These are directly modified versions of the fortran routines of the same names.
They include slightly less error checking at the moment and should be further
polished for use in actual matlab assimilations as opposed to graphical demonstrations.
These have been tested for direct inversion and appear to be bitwise identical for the
small cases tested.
…unction weighted_norm_inv.m out of obs_increment_rhf.m since it is needed for the
recently added cdf and inverse cdf of the bnrh. All inflation variants now work
in bounded_oned_ensemble.
existing GUIs can use it without modification.
an additional plot box for the transformed ensemble and new button
for the unobserved variable transform.
that observed variable is nonnegative. Put in appropriate error handling
if a negative value is clicked. Fixed the error messages for the observation
and obs error sd specification.
…ies. The PPI space plots are now appropriately cleared in the same fashion
as the standard state space.
…rved variable. Added PPI transforms for the observed variable. Options
limited to normal and unbounded RH since the observed variable is unbounded
in this demonstration.
distribution. Made correct use of SCALAR function for transforming
observed posterior with BNRH.
…nd the existing oned_ensemble.m and twod_ensemble.m
… from the update_ens callback. Updates the observation at each cycle and plots
the updated obs, prior and posterior.
the continuous distributions with solid curves. Added approximate
fitted continuous distributions for the EnKF for comparison.
@hkershaw-brown added the 'blocked waiting on something else' label Jan 14, 2025
@jlaucar (Contributor) commented Jan 15, 2025 via email

@hkershaw-brown (Member Author):

not with Helen, with the team! 🤦‍♀️

@hkershaw-brown (Member Author):

section 4 is a different size (widescreen)

@jlaucar (Contributor) commented Jan 21, 2025 via email

@hkershaw-brown (Member Author):

@jlaucar put the PowerPoints in this Google Drive folder:
https://drive.google.com/drive/folders/1F8XxwmYySjJVwYezZPHJRz_qjedR9X6q

@hkershaw-brown (Member Author):

added auto-converted slides to https://drive.google.com/drive/folders/1F8XxwmYySjJVwYezZPHJRz_qjedR9X6q as a backup

@jlaucar (Contributor) commented Jan 21, 2025

Here are responses to Kevin's comments on all 5 sections. These have been implemented and Kevin has reviewed the revisions.

Responses to Kevin’s comments

Section 1:

I'll stop trying to make all the notation consistent with my limited understanding.
I'm suggesting a few things that would help me understand the notation
in individual slides. There are non-notation suggestions too.

1:2 It would fit the picture better to use a red line at 1 C to represent the ob,
since the axes imply that T_O is happening with 0 probability.
Better yet would be have the star on a number line with no probability axis.
Agreed that the existence of the probability axis could be misinterpreted here. I’ve removed the vertical label from the plot.

1:3 I think the picture is a little misleading. I'd omit the star (or line),
since it's being replaced by a more complete description; the curve.
The star is not being replaced, rather it is being augmented. We deal with observed temperature numbers all the time.

1:4 The likelihood equation would be easier for me to read if it were
L(T_True) = P(T_O | T_True)
This tells me that the distribution is a function of T_True, for a value of the parameter T_O.
This would be clarified even more if the x-axis were labeled T_True.
Then I'd change "This is"... to "This is the relative probability of seeing T_O
given the space of possible true temperatures."
The likelihood is a function of T, not of T_True. It has a parameter, T_O. It is not a function of T_True, nor is T_True a parameter of the function. The text at the bottom is correct. Your proposed restatement is actually a description of the observational error distribution. The confusion between the likelihood and the error distribution is widespread and is exacerbated by the fact that the two have the same functional form for a normal distribution. This is the issue that complicates interpretation of the notation in Bishop’s GIGG filter papers.
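For readers following along, the distinction can be written compactly (illustrative notation only, not taken from the slides; T is the free variable and T_O the fixed parameter):

```latex
% Observational error distribution: a function of the observed value T_O,
% for a fixed true temperature
p(T_O \mid T_{True}) = N(T_{True}, \sigma_{obs}^2)
% Likelihood: the same functional form read as a function of the free
% variable T, with the actually observed value T_O held fixed
L(T) = p(T_O \mid T_{True} = T)
```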

1:9 The notation here implies to me that this distribution is some kind of inversion of the likelihood.
But I could be missing the point.
The equation would be clearer to me as
E(T_O) = P(T_O | T_true)
and the x-axis would be T_O, with the red * now being T_True.
I'd be even more comfortable with
E(T_O) = P(T_True | T_O).
To me that would be "the relative probability of T_True being the true temperature,
given all of the possible T_O measurements."
I removed the figure and text here and explicitly wrote the probability distribution. You are correct that this is a stochastic ‘inversion’. The likelihood is the transpose of the observation error. This really isn’t that relevant so I’ve removed some of the detail. The real point is that you have to worry about this when using non-Gaussian likelihoods (observation error distributions).
1:12 Is the figure (still) showing T_O (now T_O,1) and its likelihood (or error) distribution?
Figure removed.
1:13 And is it used, with some unspecified information, to create this prior dist.?
The prior is all the information from the first observation, which is just the likelihood in this case; clarified.
1:18 Do students ever ask why independent random errors is sufficient to
make measurements T_2 independent of T_1?
We aren’t saying that the measurements are independent. Just that the associated errors are independent. In other words, knowing the error of the first observation doesn’t tell you anything about the error of the second. Knowing the value of the first observation might tell you a lot about the value of the second one.
Is it because we've assumed unbiased obs?
Bias would be one form of correlated observation error.
Why is denominator unaffected by this assumption?
I don't know how to interpret P(T_1 | T_2).
The easiest way to understand this is to go back to the derivation of Bayes. B and C being correlated affects the weights.
Since they're independent obs, can I interpret it as P(T_1 , T_2), the probability of seeing T_1 and T_2 ?
Yes, this is the probability of T conditioned on the first observation and second observation.
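For reference, one way to write the two-observation Bayes being discussed here (a sketch; the independence of the observational errors is what lets the joint likelihood factor):

```latex
% The second ob updates the posterior from the first; the denominator is
% just the normalization discussed in 1:20 and 1:24
p(T \mid T_{O,1}, T_{O,2}) =
  \frac{p(T_{O,2} \mid T)\, p(T \mid T_{O,1})}
       {\int p(T_{O,2} \mid T)\, p(T \mid T_{O,1})\, dT}
```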
1:19 Since T = T_True, the equation would be clearer to me with T replaced by T_True.
The expressions do get cumbersome with lots of T_Trues in them,
So T_True could be replaced (throughout) by T_T.
No, this substitution is not valid. This is the likelihood as described before. T is a free variable, so the likelihood tells us the relative probability we would have observed what we did, if the true value happens to be a given value of T.
1:22 the T_O,2 likelihood looks identical to the T_O in previous slides.
I think T_O,1 is also that T_O. It shouldn't matter, but it's confusing if the 2 obs are the same (see 1:29).
The first observation prior here is just the likelihood and is not the same as the second observation; it’s the one introduced on the previous slide.
1:24 Is this a definition of the normalization we want to use,
which is maybe unrelated to the denominator in the 2 ob Bayes' (1:18)?
The denominator in 1:18 is the normalization. This is explicitly noted in 1:20. I think it also answers your question about whether students ask about this. When presenting the material, I always stress that the beauty of the ensemble methods is that we are able to ignore the complex details of the denominator.
1:29 It looks like T_O is being redefined here to describe T_O,2,
which is equal to T_O,1 (unless that's changed in 1:22).
It did change in 1:22. We could have put an additional subscript 2 after the O subscript, but that is notationally a mess and I don’t think helps clarify.
1:44 Do you care about spaces in equations in 2.? sigma has more than T.
Fixed.
1:45 A nitpick; the subscripts look like zeros. Should they be capital o?
Fixed.
1:50 Notational consistency; the T subscripts in earlier equations have
the kind of T (update, prior, ...) noted first, then the time (1,2....).
Here time is first, and member, which could be seen as a kind, is second.
It would be more consistent if these new ones were T_{n,time}.
I included all three subscripts on slides 50 and 51, in the order {p/u, t, N}. I then explicitly noted that I was dropping the p/u and t subscripts for the update computations on slide 52.

Section 2

I'm suggesting a lot of notation changes.
I'd be willing to do the grunt work of changing them,
so that your time constraints don't prevent useful,
but maybe not essential, updates from happening.

2:2 "additional variable" ->? "unobserved additional variable"
Done.
2:32 first slide where 'h' appears, but the topic has moved beyond it.
Should it appear in 2:29 or 2:30?
Actually, h should not appear in these slides any more, so it needs to be removed here and in 2:53 where it also pops up. Any non-identity forward operators are subsumed in the computation of the extended state, which is how DART has worked since 2007, although most documentation has never been updated. Added a slide after 35 that reminds folks that this schematic is only for identity observations.
2:38 Mixed notation convention here; some variables have subscript k to denote a time,
x has subscript t_k to denote the same thing.
? Does x need the more complete description for later equations?
Related to Y_{t_k} in 2:42.
There is not a notation problem here with x. x can be defined at arbitrary times; the observation-related variables are only defined at discrete times for our type of filter. The notation of an integer for the obs stuff and a continuous value for x distinguishes this. However, there is a notation problem on subsequent slides in that Y should have integer indices. This has been fixed.
2:39 Obs. error covariance is used as the standard deviation (variance?) of the normal.
Is this a common (good? useful?) generalized way of describing it?
When I see 'covariance' I start looking for things that it co-varies with,
but I think there aren't any here.
y can be a vector here, so Rk is a covariance matrix. Off-diagonal terms would define correlated observation errors. You catch an even bigger problem in the revision here. We are subsequently assuming that R is diagonal, but that has been dropped in the new slides. This has been added back in now in slide 2:40 (old 2:39).
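In symbols (a sketch consistent with the description above; Hhat and xhat are the extended-state operator and vector that come up in 2:47-2:49):

```latex
% Vector of observations y_k with observation error covariance R_k:
y_k \sim N(\hat{H}\hat{x}, R_k)
% With R_k diagonal (uncorrelated observation errors), the joint
% likelihood factors into one scalar likelihood per observation:
p(y_k \mid \hat{x}) = \prod_i N\!\big((\hat{H}\hat{x})_i,\; R_{k,ii}\big)
```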
2:45 x in likelihood doesn't need time subscript?
Nope. It’s a free random variable here.
2:47 "All zeros except a single 1 in the last number of obs columns."
But each column contains an extended state element. Obs are in rows.
Also, wouldn't the identity obs operator have 1s in a different column
for each row, so that each ob is picking out a different state variable?
Maybe leave this description for 2:48.
I removed the last line. Recall that the first num_state columns are the state variables but the last num_obs columns correspond to the observations.
2:48 This makes me think that the description (and in 2:47) should be
"Each row of Hhat_k is all zeros except for a single 1
in a unique column in the observation section." or similar.
Are the 1s always on the obs-section diagonal,
or would it show the true nature better by scrambling the rows a bit?
They are on the diagonal. Reworded the text to state this explicitly.
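A toy sketch of this structure (illustrative MATLAB, not DARTLAB code; the num_state and num_obs sizes are made up):

```matlab
% First num_state columns correspond to state variables, last num_obs
% columns to the observed priors; each row selects one observation with
% a single 1 on the obs-section diagonal.
num_state = 4;
num_obs   = 3;
Hhat = [zeros(num_obs, num_state), eye(num_obs)];
% Applying Hhat to an extended state just picks out the obs section:
x_hat = [1.0; 2.0; 3.0; 4.0; 10.0; 20.0; 30.0];  % [state; estimated obs]
y_prior = Hhat * x_hat;                          % returns [10; 20; 30]
```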
2:49 The x argument in the H is the extended state here (^),
but wasn't in earlier equations.
Good catch. H_hat operates on x_hat. Changed in all slides.
2:50 Might want to highlight "Extended" to point out that the picture is not redundant.
Is the prior x also an extended state?
Not clear what this issue was.
2:51 (and 2:52) Does this picture want an h (or H)?
See earlier. Not really since obs priors are just part of the extended state.
2:54 Would it be useful to have a picture (or exercise) showing some stage(s)
of the assimilation operating on the regular state and estimated obs
parts of the extended state (different colors)?
This would show that the estimated obs are being processed the same
as the conventional state.
Possibly, but that will have to wait for future enhancements. There is another ticket that points out that our exercises actually don’t really process obs correctly.
2:55 If we want to go the extra mile, this slide could have a different distribution
of estimated obs, to point out the variation with each successive ob.
Hasn’t been a problem, but future revisions could consider that.
2:56 "next time with observations" ->? "next time having observations"
to prevent "with" from referring to "model state" instead of "time"
Reworded.

Section 3

I didn't test scripts and exercises in this section as of 2024-12-20.
3:4 "1/5 chance that this is in any given bin." ->
"On average, 1/5 chance that this is in any given bin"
The current statement is accurate. The proposed revision would be misleading.
Or maybe
"It may be in any of the 5 bins"
This drops the key point, it is equally likely to be in any of the bins.
3:5 Nitpick; the rank histogram Count axis would be more accurate and less cluttered
if only whole numbers were displayed. (and in subsequent slides).
Done.
Do the comments about this slide make the transition that
the ensemble members of an assimilation are draws from a distribution and
the observation serves as the truth?
Technically, there has been no mention of observations here so far. Just draws from some distribution where one draw is labeled as the truth. In fact, the DART tutorial (currently down) and many of our papers discuss the ways to deal with rank histograms for observations as opposed to for OSSEs where we know the truth.
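A minimal sketch of that setup (illustrative MATLAB, not a DARTLAB exercise): truth and members are draws from the same distribution, and the truth's rank among the sorted members is equally likely to land in any of the N+1 bins.

```matlab
N = 4;                    % ensemble size, giving N+1 = 5 bins as on slide 3:4
num_trials = 100000;
counts = zeros(1, N + 1);
for t = 1:num_trials
    truth = randn;                    % one draw labeled as the truth
    ens   = randn(1, N);              % N draws form the ensemble
    rank  = sum(ens < truth) + 1;     % bin in which the truth falls
    counts(rank) = counts(rank) + 1;
end
bar(counts)   % approximately flat for a consistent ensemble
```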
3:9 "Rank histograms for good ensembles" ->? "Rank histograms for good assimilations"
since the RH comes from many ensembles, some good, some bad.
I added “(consistent)” after ‘good’ since this is the technical term, but one that we don’t want to get into here. In this case, all the ensembles are good in that they are draws from the appropriate distribution.
? Do you want to keep the implicit definition in these slides that "truth" is a series
of observations and "ensemble" is a series of draws or members from a sequence
of assimilations? I worry that this further muddies people's understanding of "ensemble".
A simple change to clarify this might be
"Want truth"... -> "On average, want truth"...
The word observation is not mentioned in this sequence. The statement ‘on average’ is inappropriate as noted above. In the matlab exercises, it is the state variables that are providing the ensemble and the truth from the model run that is having its rank computed.
3:10 Then "A biased ensemble" -> "Biased ensembles"
Changed.
3:11 And so forth.
Done.
3:13 Locating the truth in a bin is OK for an idealized experiment,
but wouldn't it communicate the same ideas to use the observation instead,
which would translate directly to real obs assimilations?
This is subtle and not addressed in DART_LAB. There is a true observation, the value that comes from applying the forward operator but does not add a random observational noise. That quantity is available in obs_sequence files for OSSEs, produced by perfect_model_obs. When using real observations as the validation, one must create an ensemble that tries to sample the same distribution; in this case that means that a draw from the observational error must be added to each ensemble member. This is mentioned in a paragraph in my 1996 paper, but then missed in Tom Hamill's ‘plagiarism’ of that paper. Not including the obs error part leads to the histograms being more u-shaped than they should be.
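A sketch of that adjustment (illustrative; assumes a Gaussian observational error with standard deviation obs_err_sd): add an independent obs-error draw to each member before ranking against the real observation, otherwise the histogram comes out more u-shaped than it should.

```matlab
N = 8; obs_err_sd = 1.0;
truth = randn;                                  % unknown in practice
observation = truth + obs_err_sd * randn;       % real ob = truth + error
ens = truth + randn(1, N);                      % prior ensemble
ens_for_rank = ens + obs_err_sd * randn(1, N);  % sample the obs distribution
rank = sum(ens_for_rank < observation) + 1;     % rank of the real ob
```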
3:21 In the Note; "inflation" -> "variance inflation"
would provide an immediate explanation and reinforce that inflation is applied to the variance.
Done
3:34 "Weakly correlated observations" ->? "weak correlations"
since it's discussing correlations between state variables and observations.
Rephrased to be clearer.
3:39 "no weight is being given to state variables on the opposite side of the domain from an observation"
->? "the regression factor falls to 0 near the opposite side of the domain from an observation"...
Rephrased to say an ob has no impact on a state variable on the opposite side of the domain.

Section 4

4:6 This equation isn't used in the rest of the slides, so maybe it's not worth making the notation consistent.
But if it is, the Y still uses the double subscript notation, while x uses a new 'x,t_k',
which is confusing for people who read it as a conditional probability;
the probability of time k happening (and x) given all the previous obs.
Changed it for the Bayes from section 2.
4:15 1. 'an ensemble' -> 'a prior ensemble' would motivate the F^p notation in 2.
4. 'Modify the PDF' Should this be the CDF?
Should be PDF. CDF is used to get the new quantiles but this transformed ensemble is a draw from the transformed PDF.
4:22 Nitpick; 'section 1' -> 'Section 1'
Done.
4:23 Is 'conjugate pairs' just an interesting aside for people who recognize it?
Conjugate priors is a standard statistical technique for parameter estimation. Generally covered in a first year undergrad stats class for stats or applied math majors. I never had such a class, but many of our students have. I left it in as ‘an interesting aside’.
4:24-34 I didn't (can't) confirm the content, but I didn't see anything that looks like a typo.
My only suggestion is to put something in the header of the 4th column; Application? Notes?
Notes seems right. Done.
4:42 I don't know the U(0, 1) abbreviation.
This is standard notation for the uniform distribution on the open interval (0, 1). I added ‘uniform’.
I gathered later that it means "spans the space, exclusive of the end values".
Correct.
4:49 I'm guessing that the numbers closest to the green stars are the number of members
which have that value, but it's a little challenging to apply that to the curve.
I added text confirming that the numbers are the number of ensemble members that have the value represented by the asterisk.
The dots and/or circles don't seem to denote members.
I can decipher it by noting that each increase in CDF of 1/(8+1) represents 1 member,
regardless of the dots and circles.
The dots and circles are standard function notation for discontinuous functions. The dot is the value at that value of the horizontal axis. The open circles indicate that the function is discontinuous there.
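A sketch of the picture being described (illustrative MATLAB; 8 members as on the slide): the ensemble CDF is a step function that jumps by 1/(N+1) at each sorted member, and the dots and open circles mark the value the function takes at each discontinuity.

```matlab
N = 8;
ens = sort(randn(1, N));
q = (1:N) / (N + 1);                   % CDF value after each member's jump
stairs([ens(1) - 1, ens], [0, q])      % step function with 1/(N+1) jumps
hold on
plot(ens, q, 'k.', 'MarkerSize', 14)   % filled dots: value taken at the jump
```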
4:53 'Correct distribution contours' -> 'Contours for correct distribution'
would be less misinterpretable.
Done. I like ‘less misinterpretable’
4:56 I don't follow what 'This is the probability integral transform' refers to.
It refers to the previous three lines. I put it as a header with the three steps below it.
4:57 Up to here quantiles have been discrete, taken from members representing a distribution.
'Quantile function' seems to be a new thing; a continuous function of a continuous quantile variable.
Is this just the inversion of 4:20?
Yes. The use of the term ‘quantile’ in the label for the figure was confusing. Deleted it and provided a definition of ‘quantile function’.
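For concreteness, a sketch of the transform chain (illustrative MATLAB, using normals in place of the slides' BNRH; normcdf and norminv are from the Statistics and Machine Learning Toolbox):

```matlab
ens = [-1.2, 0.3, 0.8, 1.5];   % prior ensemble, assumed to be draws from N(0,1)
q   = normcdf(ens, 0, 1);      % probability integral transform: q is U(0,1)
out = norminv(q, 5, 2);        % quantile function maps q to draws from N(5, 2^2)
```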
4:59 Do you want a legend in the second figure?
Probably. Given that there is a legend on 4:60, I’m letting this slide.
4:64 What do the red arrows point to?
If the top and bottom rows are the unobserved posterior and prior, what are the other rows?
Is the vertical axis the cutoff value of the localization?
The other rows are different values of localization as labeled on the axis; yes it’s the cutoff value in DART namelist parlance. Added text boxes describing the point of the arrows.

Section 5

Again, I'm suggesting notation changes. Tell me if I should implement any of them
and point me to the source files.
A motivation slide could be helpful; why and when is inflation needed?
This was discussed in Section 3, these figures are noted as extending that.
5.2 'adaptive error tolerant filter' -> 'adaptive error-tolerant filter' throughout.
Done.
This is what I think is shown here, but it took me a while to (mis)understand it,
based on the current content. If the background assumptions are in the verbal comments
near the beginning, then I probably would have followed it fine.
But if this should be a free-standing slide, then I think the figure
should be smaller and there should be more description.
I think that 'expected separation' brings up questions of what we expect and why,
which don't need to be asked or answered here. If that term needs to be kept,
then in the figure '4.714 SDs' should be changed to '4.7 ESs' (and in following slides).
The text could be

  1. Tiny probabilities at overlap; unlikely they describe the same thing.
  2. The 'consistency threshold' of 2 distributions is sqrt(sigma_p^2 + sigma_o^2) = ~0.85
  3. Actual separation is 4.7 times larger; one or both are poor representations of the truth.
     Model error? Obs error? Random chance?

I changed to Expected separation. The rest of the proposed text would present some difficulties, so it was not incorporated at this time. This whole section needs theoretical polishing in the future if we keep it.
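For reference, the quantity under discussion (a sketch using the sigmas named above):

```latex
% If prior mean and observation estimate the same truth with independent
% errors, their separation is a draw from
D = \bar{y}_{prior} - y_{obs} \sim N(0, \theta^2), \qquad
\theta = \sqrt{\sigma_p^2 + \sigma_o^2}
% The slide's 4.714 is the actual separation measured in units of theta.
```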
5.3 'expected separation' causes additional confusion here; to me, increasing
the expected separation means "I expect them to appear more separated",
while inflation makes them appear less separated. So
4. ->? 'Inflating increases the consistency threshold (combined uncertainty)
and makes them appear more consistent.'
I put expected separation in single quotes to try to indicate that it is the expectation that is increasing. Not sure how else to clarify this.
5.4 The second comment (and prior usage) makes me think that
'prior mean y to obs is' should be 'prior mean to obs y is'
but 'y' isn't used, so it could be omitted. How about
'Distance from prior mean to obs distribution; D = N(0, ...) == N(0,theta)'
Omitted y to clarify.

Then there's y_O or y_0. It's ambiguous and later slides make me think
that it's y_0 (zero) because of the y_k (time index) variables.
Can we omit the subscript here?
The subscript is a letter, not a number. Added glossary slide to define variables to avoid this type of confusion.
5.5 'Use Bayesian'... looks like magic. I assume it doesn't matter at this point.
(Again) does the 'lambda,t_k' notation mean lambda at time t_k
and not 'the probability of t_k and lambda, given all the previous obs'?
Yes. Has been defined now.
Yes, Bayes is magic. All we are doing here is the same old multiplying of a prior times a likelihood. The challenge is that our observation (likelihood) from the instrument is not the likelihood for lambda which is a totally different variable. Hence, the likelihood for lambda is not normal, we lose the beauty of the KF normal*normal, and extreme things must be done.
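A sketch of that structure (my reconstruction of the usual adaptive-inflation form, in which the lambda-dependence enters through the expected separation):

```latex
% Same old prior times likelihood, but for the inflation parameter:
p(\lambda \mid y_o) \propto p(y_o \mid \lambda)\, p(\lambda)
% The likelihood of lambda is the normal density for the observed
% separation D evaluated with a lambda-dependent spread, so it is not
% normal in lambda:
\theta(\lambda) = \sqrt{\lambda\,\sigma_p^2 + \sigma_o^2}, \qquad
p(y_o \mid \lambda) = \frac{1}{\sqrt{2\pi}\,\theta(\lambda)}
  \exp\!\left(-\frac{D^2}{2\,\theta(\lambda)^2}\right)
```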
5.6 If y_k is sufficient notation, how about simplifying Y_{t_k} to Y_k? (throughout)
And use lambda_k instead of lambda,t_k?
Done.
5.7 Is one intention of the top figure to show that variance "inflation" can reduce the variance?
It's not clear if this slide shows combining distributions or specific probabilities.
The 2 stars in the second figure make me think "specific".
But plotting them on a probability graph is implies that the posterior has 0 probability,
which is probably not what's intended. And the prior and likelihood are both 0.75,
which also doesn't make sense.
Ah! Reading ahead to 5.8 clarifies it somewhat. How about switching the lambda values
in 5.8 and 5.7, so that it's easier to interpret what's happening?
And then the first "inflation" that readers see is actually inflating the prior.
Added additional explanation about where the asterisks came from on each slide.
5.13 'One option is to use Gaussian prior for lambda.'
Wasn't this already assumed in 5.10 (5.6)? Should this be 'posterior'?
Changed to posterior. Good catch.
5.15 This version would be easier for me to follow.

  1. Evaluate the numerator at mean (p_m == p(lambda_u^bar))
     and second point (p_sigma == p(lambda_u^bar + sigma_{lambda,p}))
  2. Find sigma_{lambda,u}^2 so N(...) goes through p_m and p_sigma.
  3. Compute as sigma.... where r = p_sigma/p_m

Done; a sketch of the computation follows.
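A sketch of that computation (my reconstruction; the numbers are made up, and the closed form follows from fitting a Gaussian through the two points, consistent with the later notes that ln(r) appears and the -1 sits inside the sqrt):

```matlab
% For a Gaussian, r = p_sigma / p_m = exp(-sigma_p^2 / (2*sigma_u^2)),
% so sigma_u = sigma_p * sqrt(-1 / (2*log(r)));  note log(r) < 0.
p_m     = 0.40;          % numerator evaluated at the posterior mean
p_sigma = 0.30;          % numerator evaluated one prior sd away
sigma_lambda_p = 0.25;   % prior sd of lambda
r = p_sigma / p_m;
sigma_lambda_u = sigma_lambda_p * sqrt(-1 / (2 * log(r)))
```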
5.16 Labeling the x-axis would emphasize that the picture is not showing
the updated inflation distribution.
Done, and on subsequent slides.
5.18 3. would be clearer to me (and still correct?) as
"Use inflated prior to compute posterior ensemble of estimated y."
Done.
5.19 Nitpick; spacing of p() is messed up.
Fixed.
5.20 It could be helpful to make the connection between the 'inflation mean' and 'value';
'Minimum value of inflation (mean), often'...
Done.
5.26 2. Shouldn't there be the step of applying the inflation between b. and c.?
Header on this slide and the previous one was misleading. This is for state variable inflation, so there is no need to inflate the obs.

@kdraeder (Contributor):

There were a few more updates to Section 5, handled via email:
5.7 'slide 4' -> 'slide 5'
5.8 'slide 4' -> 'slide 5'
5.9 'slide 4' -> 'slide 5'

5.16 nitpick; lnr -> ln(r)

5.28 I see that -1 is now inside the sqrt.
If that's intentional, then the () are not needed.

These are all fixed.


Jeff asked me to review and approve this PR, but I should note that I have read through the responses to Sections 1-4, but not the updated slides. Do I need to review the slides?

@jlaucar (Contributor) commented Jan 21, 2025 via email

@hkershaw-brown removed the 'blocked waiting on something else' label Jan 22, 2025
@hkershaw-brown (Member Author):

> Jeff asked me to review and approve this PR, but I should note that I have read through the responses to Sections 1-4, but not the updated slides. Do I need to review the slides?

Not sure if this is a question for me, but here is my 2 cents. It has taken 6 months to get access to the ppt slides, so no one other than Jeff has been able to make changes in that time. The code has been used by students at a workshop, but not reviewed. So I'll leave it up to your judgment what you need to review for slides for this pull request.
If this is important in your decision on what to review: this will be the last pull request where we take new pdfs, only fixes to existing slides going forward, then removal of pdfs.

Cheers,
Helen

@jlaucar (Contributor) commented Jan 22, 2025 via email

@hkershaw-brown (Member Author):

looking forward to seeing the .rst for the dart tutorial!

@jlaucar (Contributor) commented Jan 22, 2025 via email

@hkershaw-brown (Member Author):

I think that the .rst has been updated in the current pull request?

forget it, it was a joke. I will try to be less funny in the future.

@kdraeder (Contributor):

Here are my notes from reviewing the pdfs, which I fetched from the NCAR/DART qceff_dartlab branch this morning.

1.29 1.9 and 1.10 define the normals using sigma^2, while this page uses just sigma.
Does it matter?
1.44 is like 1.29
1.52 is like 1.29

1.45 t_O -> T_O or T_{O,2}
1.46 same


2.43 I was under the impression that a smoother uses data
from before and after the time of interest.

2.48 Should the state operated on by m^hat be an extended state vector (x^hat_{t_k})?
2.50 (3),(4) same


3 No fixes needed


4 No fixes needed

@jlaucar (Contributor) commented Jan 22, 2025 via email

@hkershaw-brown added the 'release! bundle with next release' label Jan 23, 2025
@jlaucar (Contributor) commented Jan 23, 2025

The fixes requested by Kevin in Section 1 have been pushed and the PowerPoint files have been updated.

@kdraeder (Contributor) left a review comment:

As Jeff pointed out, it will likely never be perfect, but we made a lot of progress in that direction.
In my view, the biggest remaining improvement may need to wait until the community's notation conventions for conditional probabilities, error functions, likelihoods, etc. settle on something that's unambiguous and straightforward for students to use.
Kevin

@hkershaw-brown merged commit 9906662 into main Jan 23, 2025
4 checks passed
@hkershaw-brown deleted the qceff_dartlab branch January 23, 2025 19:26
Labels: DARTLAB · release! bundle with next release
Closes: Feature request: DARTLAB QCEFF (#739)