dev/core#2745 - Contribution Tokens - Support 'contributionId' #21134
Conversation
This just excludes contact id - see https://lab.civicrm.org/dev/core/-/issues/2745 for extra list
(Standard links)
```php
$tokenProcessor = new TokenProcessor(\Civi::dispatcher(), [
  'controller' => get_class(),
  'smarty' => FALSE,
  'schema' => ['contributionId'],
```
This 'schema' is required if I copy the mechanism from #21144 - but that feels like it should be unnecessary
Off the cuff, I'm OK with that because (one way or another) the signal has to be provided in the `listTokens` use-case, and it's good discipline to have an upfront idea of what data is going in.

OTOH, yes, it's sort of redundant for this use-case. There's enough information in the row-data to infer that `contributionId` is present. I suppose an opportune spot to leverage that might be at the start of `TokenProcessor->evaluate()` -- recompute `schema` to include `array_unique(array_merge(array_keys(...each row...)))`. And the recently added bit in `ActionSchedule::sendMailings()` (vis-a-vis `schema`) would be -3 SLOC if the autodetection were better.
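The autodetection idea could be sketched roughly like this (hypothetical, not the actual CiviCRM implementation; `getRows()` is a real `TokenProcessor` method, but the public `$context` mutation and the per-row key scan are assumptions for illustration):

```php
// Hypothetical sketch: at the start of TokenProcessor->evaluate(), infer
// the schema from the keys already present in each row's context, so a
// caller would not need to declare 'schema' => ['contributionId'] upfront.
$detected = [];
foreach ($tokenProcessor->getRows() as $row) {
  // Assumes the row context can be enumerated as an array of keys.
  $detected = array_merge($detected, array_keys((array) $row->context));
}
$tokenProcessor->context['schema'] = array_unique(array_merge(
  $tokenProcessor->context['schema'] ?? [],
  $detected
));
```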
```php
  return $result;
}

public function getCurrencyFieldName() {
```
This asserts a strong opinion that currency can be fetched as a single DB field. While the `civicrm_contribution` record meets that expectation, I don't see why it's a good thing to bake that opinion into the base-class of token source.
```php
 *
 * @return string
 */
public function getCurrency($row): string {
```
This asserts a strong opinion that currency is one-per-row. While the `civicrm_contribution` record meets that expectation, I don't see why it's a good thing to bake that opinion into the base-class of token source.
Yeah - it's also overridable by class. I can't offhand think of an exception to there being zero or one per row as this expects - but if there were more than one it could be added.
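As an illustration of the "overridable by class" point, a subclass for an entity with no single currency column might look something like this (a sketch only: the subclass and its context key are invented, and the `CRM_Core_EntityTokens` parent name is assumed from the discussion below):

```php
// Hypothetical subclass: derives currency per row instead of reading one
// DB field, overriding the base-class assumption under discussion.
class CRM_Example_WidgetTokens extends CRM_Core_EntityTokens {

  public function getCurrencyFieldName() {
    // No single DB field holds the currency for this (invented) entity.
    return NULL;
  }

  public function getCurrency($row): string {
    // Fall back to the site default when the row carries no currency.
    return $row->context['currency'] ?? \Civi::settings()->get('defaultCurrency');
  }

}
```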
My main problem with this formulation is that the class is getting so heavy. It adds new mass+surface-area without removing equivalent stuff, and (for me) the purpose of loading data via APIv4+prefetch is to be more useful and also simpler. To try to show where this sense comes from, I did a bit of rundown comparing two previous branches for this (#21109 aka

- Compare: Same signatures/semantics
  - SignatureWeight: Same
- Compare: Primary data loading
  - Opinion: This is the fundamental difference in how they load data. IMHO loading via prefetch+apiv4 is better/more maintainable than loading via
  - SignatureWeight: cont_toke=6; master-contrib-allkeys=3
- Compare: Token/field lists
  - Opinion: These are basically the same idea. Either name is fine. Passing in
  - SignatureWeight: cont_toke=1; master-contrib-allkeys=1
  - Opinion: Both
  - Opinion: (It seems to me that you want both to push for migration and push against compatibility. I agree migration is more ideal, so I support work on that, but I have a dose of skepticism about its efficacy, and I prefer to keep clear space for compatibility in the interim.)
  - SignatureWeight: cont_toke=3; master-contrib-allkeys=2
- Compare: Field reflection
  - Opinion: I'm not really sure that any of these functions belong in
  - SignatureWeight: cont_toke=2; master-contrib-allkeys=1
- Compare: Pseudo fields
  - Opinion: All of these seem broadly redundant to me, so personally I'd love to remove them. However,
  - SignatureWeight: cont_toke=4; master-contrib-allkeys=2

(Note: Tallying up the signatures that differ: 16 in
@totten yeah - we are definitely on the bikes here - this class is solely used by contribution tokens at the moment and is clearly marked as internal so it's not an external interface - although we might agree one later (preferably in the

My plan, once we have contribution tokens working, is, as you note, to add the missing processors - contribution_recur, participant & case - and of course I would extend this class - my expectation was that I would implement

The reason I pushed quite hard on agreement around dev/core#2745 is that once we accept that it's OK to expose all tokens except ones that DON'T make sense, then declaring which ones DON'T make sense becomes more helpful than an array of the tokens that seemed helpful when we wrote the array
Yeah, I think that's wise, leaving that scope for another day. FWIW, I don't think there's much of a difference between BAO vs CRM_Core_PseudoConstant vs APIv4. In all cases, one wraps the read operation with some kind of
Right, which I think creates the pressure for us to like the base class. When we add more on top, it'll become harder to move the bits underneath.
TBH, on a policy level, it doesn't really matter to me if the fields are default-off (opt-in) or default-on (opt-out) for token support. Think of the risks as "probability of mistake * cost of mistake". For simple numbers, guesstimate that 90% of fields should be enabled and 10% should be disabled. With default-on, the default is mistaken only 10% of the time. (Default-off is much worse - 90% mistaken.)

But. The cost of removing a bad token is higher than the cost of adding an omitted token. To add an omitted one, you just update the list. To remove a bad one, you need to migrate/break/replace/communicate (and you may have some additional impact - like revealing sensitive data).

I really can't say one is better than the other. I'm ambivalent. I'm OK with default-on, though. The reason I submitted #21145 was because default-on seemed important to you, and I didn't want that presenting as some kind of reason to block the changes in the data-loading / simplification (which is the part that seems important to me).
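The expected-cost reasoning can be made concrete with those guesstimated numbers (all values here are invented for illustration, not measured):

```php
// Illustrative arithmetic for default-on vs default-off token exposure.
$fields         = 100;  // assume 100 candidate fields
$costAddOmitted = 1;    // cheap: just update the list
$costRemoveBad  = 10;   // expensive: migrate/break/replace/communicate

// Default-on: the ~10% of fields that should be disabled are the mistakes.
$defaultOnCost = (0.1 * $fields) * $costRemoveBad;    // 10 * 10 = 100

// Default-off: the ~90% of fields that should be enabled are the mistakes.
$defaultOffCost = (0.9 * $fields) * $costAddOmitted;  // 90 * 1 = 90

// With these (made-up) numbers the expected costs land close together,
// which is roughly why neither policy clearly wins.
```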
@totten but #21145 isn't default-on - it defaults to listing all the fields out one by one? Where I was trying to get to was
I DO see the base class as rather less locked in place than you do - since we'll only be using it from tested core code at the moment, the inner workings should be more flexible for now & for the next few months
What was the reason for creating the new
@mattwire - yeah it was always considered that it might be temporary & the 2 might be reconciled in some way in the end - the contribution class was starting from the point of not being exposed at all & so the stuff that was generic was moved onto the EntityTokens class. It should be a good basis for new classes (contribution_recur, participant) but the classes that already exist need a bit of effort to find the inconsistencies and standardise on a consistent token set. I wasn't going to try looking at reconciling with activity & its trait down the track
Also - it's important to note this is an internal core class with 100% test coverage so we can move stuff around as long as tests pass. I'm less sure about the trait - i.e. hopefully no extensions are using it because it's not a supported interface, but I don't know about the test coverage
@totten so I'm trying to come back to this & find where the resolvable part is. I guess to outline my priorities
I've been through a few drafts trying to find a way to reply, but they kept getting too long. Trying again as a speed-version (now with lots of typos)...
@totten I've been thinking about this & I think the fact that we are having this level of architectural discussion over an incremental refactoring PR, on a class I created a couple of weeks ago in order to work through an incremental refactoring process, is actually a pretty clear sign that the process I was attempting isn't gonna work.

We can negotiate our way through this one - but if this class has become a class with an agreed architecture, rather than part of a refactoring and test-writing process, then it no longer has the flexibility for me to continue with that process. And that is also OK - in that it was always gonna be unpaid work & choosing not to do it is also a positive thing.

So - assuming I'm no longer committed to looking at any other entities - there are 2 ways forward I see
Either way - with either/or merged I can write what I need for recurring tokens into our wmf extension & the combination of that + this merged will unblock me to review your PR and we can call it a day & just be happy we cleaned up one entity to the point where it is broadly available, tested & works consistently
We discussed out-of-band and agreed (a) this does work for adding

Topics that may lead to major changes in this class will be (a) deduplicating the loading code, (b) allowing multilingual pseudoconstant evaluation, (c) sharing a lazy-load mechanism between TokenProcessor+Smarty.
Overview
This salvages the useful part (the test) from #21058 - it should fail for now, but once #21109 is merged I'll rebase that out & we'll be left with the (still failing) test that will cover contribution tokens once they are listening properly - @totten had ideas on how to make it listen in #21079 - I'm also going to close that one as it needs to be reworked to pass this test once #21109 is merged - but at this stage it's mostly full of stuff that is not relevant now
Before