
Incorrect result for Onyx update with delete and object property delete #615

neil-marcellini opened this issue Feb 13, 2025 · 28 comments

@neil-marcellini
Contributor

Problem

When an item is set in a collection in Onyx, and Onyx.update is called with two merge updates, where the first sets the collection item to null in order to remove it and the second sets a property of that item to null, the item remains unchanged in the collection when it should be set to an empty object.

Please see the related App issue for more context.

Issue example
// Given we have a report action to set into Onyx
window.Onyx.set('reportActions_8865516015258724', {
    '5738984309614092595': {
        reportActionID: '5738984309614092595',
        someKey: 'someValue',
    },
});

const queuedUpdates = [
    {
        key: 'reportActions_8865516015258724',
        onyxMethod: 'merge',
        value: {
            '5738984309614092595': null,
        },
    },
    {
        onyxMethod: 'merge',
        key: 'reportActions_8865516015258724',
        value: {
            '5738984309614092595': {
                pendingAction: null,
            },
        },
    },
];
window.Onyx.update(queuedUpdates);

// Then the object under the key 5738984309614092595 in the collection reportActions_8865516015258724 should be set to an empty object, but it's actually left at the original value.
window.Onyx.get('reportActions_8865516015258724').then((val) => console.log('Result after updates', val['5738984309614092595']));
Proof that setting an action to an empty object will remove it from the UI in Expensify/App
// Get actions from a report. Replace the id (8865516015258724) with the reportID you want to test.
window.Onyx.get('reportActions_8865516015258724').then((val) => {
    const actionsArray = Object.values(val);
    actionsArray.sort((a, b) => new Date(a.created) - new Date(b.created));

    // Replace the most recent action with an empty object
    const replacedActionID = actionsArray[actionsArray.length - 1].reportActionID;
    actionsArray[actionsArray.length - 1] = {};

    const sortedActions = actionsArray.reduce((acc, action) => {
        acc[action.reportActionID ?? replacedActionID] = action;
        return acc;
    }, {});

    console.log('console test sortedActions', sortedActions);

    // Set the new collection into Onyx
    window.Onyx.set('reportActions_8865516015258724', sortedActions);

    // You should see that the most recent action is removed from the UI
});

Solution

Find the root cause and fix it.

@fabioh8010
Contributor

Hey, I'm Fábio from Callstack – expert agency – and I would like to work on this issue.

@fabioh8010
Contributor

fabioh8010 commented Feb 18, 2025

I've created a unit test for this case here: main...callstack-internal:react-native-onyx:bugfix/615

From my analysis, it looks like this behavior happens because:

  1. These two merge operations inside Onyx.update() are being merged and batched here. So basically, merging { '5738984309614092595': { pendingAction: null } } over { '5738984309614092595': null } will result in { '5738984309614092595': { pendingAction: null } }.
  2. Then, we pass the batched changes to Onyx.merge(), which will merge { '5738984309614092595': { pendingAction: null } } with { '5738984309614092595': { reportActionID: '5738984309614092595', someKey: 'someValue' } }.
  3. Since pendingAction is null, the property is discarded, so we are basically merging { '5738984309614092595': { reportActionID: '5738984309614092595', someKey: 'someValue' } } with { '5738984309614092595': {} }, which results in the original object because it's a merge operation.
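
To make this concrete, here is a minimal sketch of the two-stage behavior described above. It is illustrative only, not Onyx's actual fastMerge/mergeObject implementation: nested nulls are kept while batching the queued updates (stage 1), but remove keys when the batch is applied to the stored value (stage 2).

type Obj = Record<string, any>;

function sketchMerge(target: Obj | null | undefined, source: Obj, shouldRemoveNulls: boolean): Obj {
    const result: Obj = {...(target ?? {})};
    Object.entries(source).forEach(([key, value]) => {
        if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
            result[key] = sketchMerge(result[key], value, shouldRemoveNulls);
        } else if (value === null && shouldRemoveNulls) {
            // Applying the batch: a null value removes the key entirely.
            delete result[key];
        } else {
            // Batching: nulls are kept as part of the delta changes.
            result[key] = value;
        }
    });
    return result;
}

// Stage 1: batching the two updates keeps the nested null as a delta change.
const batched = sketchMerge(
    {'5738984309614092595': null},
    {'5738984309614092595': {pendingAction: null}},
    false,
);
// batched = { '5738984309614092595': { pendingAction: null } }

// Stage 2 (steps 2 and 3 above): applying the batch discards the no-op
// `pendingAction: null`, so the stored object comes out unchanged.
const stored = {'5738984309614092595': {reportActionID: '5738984309614092595', someKey: 'someValue'}};
console.log(sketchMerge(stored, batched, true));
// { '5738984309614092595': { reportActionID: '5738984309614092595', someKey: 'someValue' } }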

If we change the second operation to set, for example, everything works as desired.

    {
        onyxMethod: 'set',
        key: 'reportActions_8865516015258724',
        value: {
            '5738984309614092595': {
                pendingAction: null,
            },
        },
    },

In summary, the current behavior looks correct to me.

@neil-marcellini
Contributor Author

@fabioh8010 thanks for the investigation! It mostly makes sense, although I'm a bit confused why the line you linked to in Onyx is calling multiSet. Maybe you meant to link to this line?

You describe how the updates are merged together and I think that's where the problem lies. If we're setting a value for a key to null then we need to fully remove that key. The subsequent update removing the pendingAction property could be ignored since it will do nothing. The way we merge the updates together currently changes the meaning from "clear the value for the action with key '5738984309614092595', and then remove the pendingAction property on that action" to "remove the pendingAction property on that action". These two are not the same, so batching the updates this way is invalid.

Also, you recommend changing the action to set. Although that could work for the exact example I gave in this issue, it doesn't really work in general for our app. In the linked App issue, an optimistic report preview action is created on the client for a new IOU report, because the client hasn't loaded the existing IOU report and therefore doesn't know it exists and should be used. The backend finds the existing IOU report and uses that, so it sends this update to merge null for that action value to clear it from the UI. The pending action update comes from the Onyx successData on the client. When the request succeeds we want to clear the pending action. It must be a merge, because if we do actually create a new report preview action, we can't wipe out the action entirely; we only want to remove this specific key. A set operation sets the object to an empty one, because it only has the pendingAction key, which is set to null.

Therefore using set doesn't seem like an option and we should fix the Onyx behavior. How does that sound? Lmk if you have questions.

@fabioh8010
Contributor

fabioh8010 commented Feb 20, 2025

@neil-marcellini Sorry, I meant this line 🙈

I understand your reasoning about the updates merging, and we do have such logic in place.

For every update passed to Onyx.update(), we enqueue it if it is a merge or a set. When enqueuing merge operations, we have a condition that if the value is null we discard the previous operations, so all previous updates that came before the null update are removed.

With the updateQueue processed, we'll iterate over each update, batching the operations with applyMerge for each key. If the first operation of that batch is to set the value to null, we just do an Onyx.set() with the batched changes so we can correctly replace the data.

So let's say you have an initial state like this:

await Onyx.set(`${ONYX_KEYS.COLLECTION.TEST_UPDATE}entry1`, {
    id: 'sub_entry1',
    someKey: 'someValue',
});

And you call Onyx.update() with these updates:

const queuedUpdates: OnyxUpdate[] = [
    {
        key: `${ONYX_KEYS.COLLECTION.TEST_UPDATE}entry1`,
        onyxMethod: 'merge',
        value: null,
    },
    {
        key: `${ONYX_KEYS.COLLECTION.TEST_UPDATE}entry1`,
        onyxMethod: 'merge',
        value: {
            pendingAction: null,
        },
    },
];

await Onyx.update(queuedUpdates);

The first update in the queue sets the value to null. The batched changes will be { pendingAction: null }, but since the first update is null, we instead use Onyx.set() to completely replace the data. This is the behavior that you are looking for.

Now, let's analyse your situation. You have an initial state like this:

await Onyx.set(`${ONYX_KEYS.COLLECTION.TEST_UPDATE}entry1`, {
    sub_entry1: {
        id: 'sub_entry1',
        someKey: 'someValue',
    },
});

Notice that now we are dealing with a collection of collections (report actions). You call Onyx.update() with these changes:

const queuedUpdates: OnyxUpdate[] = [
    {
        key: `${ONYX_KEYS.COLLECTION.TEST_UPDATE}entry1`,
        onyxMethod: 'merge',
        value: {
            sub_entry1: null,
        },
    },
    {
        key: `${ONYX_KEYS.COLLECTION.TEST_UPDATE}entry1`,
        onyxMethod: 'merge',
        value: {
            sub_entry1: {
                pendingAction: null,
            },
        },
    },
];

The first update in the queue no longer sets the whole value to null. This is because you want to "reset" a single report action, not the entire collection of report actions of that report. The batched changes will be { sub_entry1: { pendingAction: null } }, but since the first update is not null, we'll call Onyx.merge() instead.

Onyx.merge() will merge your initial object with { sub_entry1: { pendingAction: null } }, which results in the same object because the pendingAction change is considered a no-op (your initial data doesn't have pendingAction and we are trying to change it to null, which is basically what we already have – no pendingAction property) and is discarded from the process.


So, in summary, we do have the logic you are mentioning, but it doesn't apply recursively inside the objects when doing merges. I still think that, considering all of the above, the current logic is correct regarding the batching of the merges. When dealing with inner properties/objects during a merge, I don't think we want null to "override" the other batched changes in these deeply nested properties like we do at the top level; that should be the job of a set operation, where we explicitly want to reset some value.

Some options I can think of for now:

  1. Disable all merge batching inside Onyx.update() and Onyx.merge(). With that, every merge operation would be treated separately and we would have the desired outcome regarding this issue. However, by doing this we would lose performance and possibly create UI "flash" updates because the merges would no longer be batched.
  2. Create logic to build "sub-batches" of batches in a recursive way during merges, so that for each "sub-batch" we can decide whether to use Onyx.set() or Onyx.merge(), for example. It's just a rough idea, so I don't yet know the feasibility and implications of such a decision.
  3. Maybe re-design the report actions Onyx state to stop being a collection of collections like it is today (sketched below). So we would have something like reportActions_8865516015258724_5738984309614092595 where the value is just the report action ({ reportActionID: '5738984309614092595', someKey: 'someValue' }), and Onyx updates that set the value to null would reset it correctly. It's a big re-design though that would require BE and FE changes.
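
For illustration, option 3 would change the storage shape roughly like this (the keys below are hypothetical, based on the example from this issue):

// Today: one collection key whose value is itself a collection of actions.
// reportActions_8865516015258724 -> { '5738984309614092595': { reportActionID: ..., someKey: ... } }

// After the re-design: one Onyx key per report action.
// reportActions_8865516015258724_5738984309614092595 -> { reportActionID: '5738984309614092595', someKey: 'someValue' }

// A null merge would then target the top level of the key and reset it correctly:
window.Onyx.update([
    {
        key: 'reportActions_8865516015258724_5738984309614092595',
        onyxMethod: 'merge',
        value: null,
    },
]);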

About this issue specifically, I have a question. @neil-marcellini Why do we have these two updates in sequence?

const queuedUpdates = [
    {
        key: 'reportActions_8865516015258724',
        onyxMethod: 'merge',
        value: {
            '5738984309614092595': null,
        },
    },
    {
        onyxMethod: 'merge',
        key: 'reportActions_8865516015258724',
        value: {
            '5738984309614092595': {
                pendingAction: null,
            },
        },
    },
];

If we are resetting the data and right after that setting it to { pendingAction: null }, which basically means an empty object anyway, couldn't we just have one update?

const queuedUpdates = [
    {
        key: 'reportActions_8865516015258724',
        onyxMethod: 'merge',
        value: {
            '5738984309614092595': null,
        },
    },
];

That would solve the problem.

@fabioh8010
Contributor

I edited my comment to clarify more things and fix a mistake

@neil-marcellini
Contributor Author

We talked via Slack DM and agreed that the problem lies in Onyx. Fabio is going to create a draft PR with a failing test and work on a proposal.

@fabioh8010
Contributor

Here's the WIP Draft PR: #619

@fabioh8010
Contributor

fabioh8010 commented Feb 25, 2025

Proposal

Please re-state the problem that we are trying to solve in this issue.

When calling Onyx.update() / Onyx.merge(), individual updates of the same key can be enqueued and batched together into one update in order to improve performance. This is done by merging all updates into a single one, and then applying that update to the Onyx store.

Currently, when batching these updates, if there is an update that sets the value to null at the top-level object, it will reset the current value and apply the subsequent changes, as we want to fully reset the data (see here for Onyx.update() and here for Onyx.merge()). However, when null is set on a nested property, it does not reset the current value before applying the subsequent changes, preventing that property from being fully reset as desired.

What is the root cause of that problem?

The problem happens because, during batching of the merges/updates, and unlike for top-level changes, there is no mechanism to reset the current value of a nested property after a null change, so the subsequent changes just update the current value of the property instead of fully resetting it.

In summary, when batching top-level objects:

Onyx.set('someKey', {
    subKey: 'someValue',
});

const queuedUpdates = [
    {
        key: 'someKey',
        onyxMethod: 'merge',
        value: null,
    },
    {
        key: 'someKey',
        onyxMethod: 'merge',
        value: {
            newKey: 'wowNewValue',
        },
    },
];

Onyx.update(queuedUpdates);

The final result will be {"newKey":"wowNewValue"}, which is correct because changing the top-level object to null in the first update will make it ignore the current value and only apply the subsequent updates, fully resetting the data.

Now, when batching nested properties:

Onyx.set('someKey', {
    sub_entry1: {
        subKey: 'someValue',
    }
});

const queuedUpdates = [
    {
        key: 'someKey',
        onyxMethod: 'merge',
        value: {
            sub_entry1: null // nested property, not the top-level one
        },
    },
    {
        key: 'someKey',
        onyxMethod: 'merge',
        value: {
            sub_entry1: {
                newKey: 'wowNewValue',
            }
        },
    },
];

Onyx.update(queuedUpdates);

The final result will be {"sub_entry1":{"subKey":"someValue","newKey":"wowNewValue"}}, but we want this same resetting behavior to work with nested properties, so it should be {"sub_entry1":{"newKey":"wowNewValue"}}.

This issue affects both the Onyx.update() and Onyx.merge() methods, as both use basically the same logic to batch merge changes together before applying them to the store.

What changes do you think we should make in order to solve the problem?

  1. Change Onyx.merge()/Onyx.update() batching logic (applyMerge/fastMerge/mergeObject) to replace the current value of a nested property after a null change in it, so the subsequent updates of that batch in this nested property can fully reset its data.
  2. Refactor Onyx.update() to stop doing its own batching as it's a redundant operation that is already being done in Onyx.merge().

I've created a new WIP Draft PR here, since the first one was addressing this problem with a wrong solution.

What specific scenarios should we cover in automated tests to prevent reintroducing this issue in the future?

Unit tests will be added to cover this specific scenario.
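
For illustration, a sketch of such a test (assuming the Jest setup used in the repo's existing tests and a promise-based Onyx.get like in the console examples above; this is not the final test from the PR):

it('fully resets a nested property that was set to null earlier in the batch', async () => {
    await Onyx.set('someKey', {sub_entry1: {subKey: 'someValue'}});

    await Onyx.update([
        {key: 'someKey', onyxMethod: 'merge', value: {sub_entry1: null}},
        {key: 'someKey', onyxMethod: 'merge', value: {sub_entry1: {newKey: 'wowNewValue'}}},
    ]);

    // The null change must reset sub_entry1 before the later merge applies.
    const result = await Onyx.get('someKey');
    expect(result).toEqual({sub_entry1: {newKey: 'wowNewValue'}});
});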

What alternative solutions did you explore? (Optional)

N/A

@fabioh8010
Contributor

Updates:

  • Found the fix for the new test scenario and everything seems to be working now.
  • Next steps will be testing this in real time in E/App.

@neil-marcellini
Contributor Author

Please re-state the problem that we are trying to solve in this issue.

When batching multiple Onyx merge updates, the batching mechanism does not give priority to null merges (e.g. we want to remove a property) and instead allow subsequent updates to change the same property, invalidating the desired purpose of null.

I don't quite understand the wording for this. It's ok to allow subsequent updates to change the same property.

I would say something like "When Onyx.update is called with updates for a collection, where the first update sets a value to null for a key that is not at the top level, and another update sets a property to null for that same key, then the value remains unchanged for Onyx subscribers when it should be set to an empty object."

What is the root cause of that problem?

The problem happens because, during batching, the priority of null changes are not applied recursively inside the objects when doing merges. Currently this logic only works if you merge the top-level object with null.

Sounds good. Please elaborate on "the priority of null changes" as you did in previous comments.

What changes do you think we should make in order to solve the problem?

Change fastMerge / mergeObject logic to discard merging changes over a null value if we are batching updates and we are dealing with nested properties.

I'm not sure exactly what you mean, but "discarding" merges over a null value sounds slightly wrong. For example, a value may be set to null and then later set to an object with some properties, and we wouldn't want to discard that subsequent update. Do we only discard when batching, and can you please elaborate more on how that works with some examples?

It sounds like you have found a pretty good approach from your testing. I want to make sure I understand it well.

@fabioh8010
Contributor

Hi @neil-marcellini , sorry about the proposal. I wrote it when I was still evaluating a solution, so it looked a little bit generic. I've now edited my comment into a more complete version; please have a look.

@neil-marcellini
Contributor Author

That's ok, thanks for updating it. I'll review it again now.

if there is an update that sets a value to null at the top level it will ignore the subsequent updates as we want to reset that data.

Are you sure that's the case? Oh wow, I tested something similar to your example and it's true. I don't know if it's causing any problems specifically because it would be unusual for us to clear a top level value entirely and then have another update filling it in, but it's not what I would expect intuitively when imagining how Onyx.update would work.

First, change OnyxUtils.applyMerge() to include a flag specifying if we are batching merge changes or not. If yes, pass this flag to fastMerge and mergeObject which will be modified to give priority over null properties when merging data into them. Let's consider this initial state as example:

I'm not quite sure what you mean by "give priority over null properties when merging data into them." I looked at your draft PR and I see that we set the destination value to null if the target value is null, which will ignore subsequent updates for a value after it has been set to null. Is that an accurate verbal description?

if (isMergeableObject(sourceValue)) {
    if (isBatchingMergeChanges && targetValue === null) {
        destination[key] = null;
    } else {
        // If the target value is null or undefined, we need to fallback to an empty object,
        // so that we can still use "fastMerge" to merge the source value,
        // to ensure that nested null values are removed from the merged object.
        const targetValueWithFallback = (targetValue ?? {}) as TObject;
        destination[key] = fastMerge(targetValueWithFallback, sourceValue, shouldRemoveNestedNulls, isBatchingMergeChanges);
    }
} else if (isBatchingMergeChanges && destination[key] === null) {
    destination[key] = null;

Next steps
I'm going to check with some others on the internal team to see if they're ok with Onyx working this way for a batch of updates. If so, then I'm good to go ahead with it.


Side note: I was going to suggest we set shouldRemoveNestedNulls to true here, but I see there's a comment explaining why we shouldn't do that.

// We first only merge the changes, so we can provide these to the native implementation (SQLite uses only delta changes in "JSON_PATCH" to merge)
// We don't want to remove null values from the "batchedDeltaChanges", because SQLite uses them to remove keys from storage natively.
const validChanges = mergeQueue[key].filter((change) => {
    const {isCompatible, existingValueType, newValueType} = utils.checkCompatibilityWithExistingValue(change, existingValue);
    if (!isCompatible) {
        Logger.logAlert(logMessages.incompatibleUpdateAlert(key, 'merge', existingValueType, newValueType));
    }
    return isCompatible;
}) as Array<OnyxInput<TKey>>;

if (!validChanges.length) {
    return Promise.resolve();
}

const batchedDeltaChanges = OnyxUtils.applyMerge(undefined, validChanges, false, true);

@neil-marcellini
Contributor Author

Posted in Slack here. I think I'm good with the proposal after thinking about it a bit more. Let's wait a day or two before merging the PR to see if anyone objects to the behavior. Please DM me when the PR is ready for a full review.

@neil-marcellini
Contributor Author

Rory and Chris chimed in and agree the current behavior of ignoring subsequent updates doesn't make sense, so let's pause any current efforts until we find a new approach.

@chrispader
Contributor

@fabioh8010 @neil-marcellini I think we should be able to re-use most of the logic for combining and applying multiple delta changes from Onyx.merge.

IMO, Onyx.update should only act as the distributing switch that calls the necessary operations in an efficient way, such as Onyx.set / Onyx.multiSet / Onyx.merge, etc.
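
A rough sketch of that idea (types and method-name strings are simplified for illustration; the real Onyx.update also handles clearing, snapshots, and collection grouping):

type SketchUpdate = {onyxMethod: 'set' | 'merge' | 'mergecollection' | 'multiset'; key: string; value: any};

function update(updates: SketchUpdate[]): Promise<unknown> {
    const promises = updates.map((u) => {
        switch (u.onyxMethod) {
            case 'set':
                return Onyx.set(u.key, u.value);
            case 'merge':
                // All batching and null handling would live inside Onyx.merge.
                return Onyx.merge(u.key, u.value);
            case 'mergecollection':
                return Onyx.mergeCollection(u.key, u.value);
            case 'multiset':
                return Onyx.multiSet(u.value);
            default:
                return Promise.resolve();
        }
    });
    return Promise.all(promises);
}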

@fabioh8010 I'll let you decide whether you want to work on this refactor; I'm happy to help in any way whenever you need me!

@fabioh8010
Contributor

@neil-marcellini @chrispader Following our last discussions on Slack, I've updated my proposal again. I believe it's much clearer now what the problem is and what we want to achieve. Please have a look and let me know if you have any further questions!

@neil-marcellini
Contributor Author

Very nice! Your updated proposal looks solid to me. Please let me know when the PR is ready for review.

@chrispader
Contributor

@neil-marcellini @chrispader Following our last discussions on Slack, I've updated my proposal again. I believe it's much clearer now what the problem is and what we want to achieve. Please have a look and let me know if you have any further questions!

looks good to me too! 🙌🏼

@fabioh8010
Contributor

Updates:

  • Fixed my tests to output the errors for the second test scenario.
  • Still working on a solution.

@fabioh8010
Contributor

Updates:

  • I've implemented a first solution for the issue by applying temporary "markers" during object merging, which are later used to replace the old objects when applicable, without adding any additional loops or iterations (see the sketch after this list).
  • It works well for Onyx.merge() and partially for Onyx.update(); some of my tests are still failing because there is logic inside Onyx.update() that batches the changes if we have multiple changes in collections, and uses either Onyx.mergeCollection() or Onyx.multiSet() to apply them. The problem is that Onyx.mergeCollection() works quite differently from Onyx.merge(), making my solution useless for this particular case. I assume we won't want to remove this part during the Onyx.update() refactor because it seems beneficial, as stated in the code, so I'm trying to figure out a way to make my solution work inside Onyx.mergeCollection().
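
Without the actual PR code at hand, here is a rough sketch of how such markers could work (all identifiers below are invented for illustration and do not come from the PR):

const REPLACE_MARKER = Symbol('onyxReplace');
type Obj = Record<string | symbol, any>;

// Batching step: when a subtree's previously queued value was null, tag the
// follow-up object so the apply step replaces the stored subtree outright.
function batchWithMarkers(target: any, source: any): any {
    if (source === null || typeof source !== 'object' || Array.isArray(source)) {
        return source;
    }
    const result: Obj = typeof target === 'object' && target !== null ? {...target} : {};
    if (target === null) {
        result[REPLACE_MARKER] = true;
    }
    Object.entries(source).forEach(([key, value]) => {
        result[key] = batchWithMarkers(result[key], value);
    });
    return result;
}

// Apply step: honor the markers against the stored value; null values still
// delete keys as usual.
function applyBatched(stored: any, batched: any): any {
    if (batched === null || typeof batched !== 'object' || Array.isArray(batched)) {
        return batched;
    }
    const shouldReplace = batched[REPLACE_MARKER] === true || typeof stored !== 'object' || stored === null;
    const result: Obj = {...(shouldReplace ? {} : stored)};
    Object.entries(batched).forEach(([key, value]) => {
        if (value === null) {
            delete result[key];
        } else {
            result[key] = applyBatched(result[key], value);
        }
    });
    return result;
}

// With the queue from this issue:
const batched = [{sub_entry1: null}, {sub_entry1: {pendingAction: null}}].reduce(
    (acc: any, change) => batchWithMarkers(acc, change),
    undefined,
);
console.log(applyBatched({sub_entry1: {id: 'sub_entry1', someKey: 'someValue'}}, batched));
// -> { sub_entry1: {} }: the report action is fully reset, as desired.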

@fabioh8010
Contributor

@chrispader Do you think we would want this part to stay during the Onyx.update() refactor? According to the comments:

Group all the collection-related keys and update each collection in a single mergeCollection call.
This is needed to prevent multiple mergeCollection calls for the same collection and merge calls for the individual items of the said collection.
This way, we ensure there is no race condition in the queued updates of the same key.

@chrispader
Contributor

I've implemented a first solution for the issue by applying temporary "markers" during object merging, which are later used to replace the old objects when applicable, without adding any additional loops or iterations.

That sounds very good, I'd probably have done it exactly the same 🙌🏼

@chrispader
Contributor

@chrispader Do you think we would want this part to stay during the Onyx.update() refactor? According to the comments:

Group all the collection-related keys and update each collection in a single mergeCollection call.
This is needed to prevent multiple mergeCollection calls for the same collection and merge calls for the individual items of the said collection.
This way, we ensure there is no race condition in the queued updates of the same key.

@fabioh8010 I'm not sure either if we'd want to keep that logic or if we can safely remove it. I'd definitely try to remove as much duplicate code around null handling and batching as possible, but I'm not sure if we can just switch from Onyx.mergeCollection/Onyx.multiMerge to a regular Onyx.merge and vice versa.

@chrispader
Contributor

chrispader commented Mar 11, 2025

To be honest, I feel like we should refactor the API design of Onyx more holistically in order to fix this issue. We should remove all redundant/duplicate code around merging, batching, and null key removal (null handling) altogether.

Batching and key removal (null handling)

We currently have several spots where we handle batching and (nested) key removal differently:

  • Onyx.merge: We are batching Onyx.merge operations from the merge queue into batchedDeltaChanges, and then pre-merge the changes with the existing value, which will remove the nested null values. We then pass the batchedDeltaChanges to SQLite, whereas we use the preMergedValue for idb-keyval.
  • Onyx.mergeCollection: We use Storage.multiMerge under the hood, which does not remove any nested null values.
  • Onyx.update (1): We replace MERGE_COLLECTION operations with enqueueMergeOperation calls and then combine them back into a single Onyx.mergeCollection. The splitting up of keys into multiSet and multiMerge operations seems like a duplicate of the logic in Onyx.mergeCollection.
  • Onyx.update (2): For the remaining individual keys, we're just using Onyx.merge, which will again remove (nested) null values.
  • SQLiteStorageProvider.multiMerge: We let SQLite handle merging through JSON_PATCH, which would technically also accept delta changes. We pre-merge the changes from the merge queue in Onyx.merge, so we don't really facilitate this, but I don't think we can work around that, since we need to broadcast the update.
  • IDBKeyValProvider.multiMerge: In IDBKeyValProvider.multiMerge we're also handling null values separately through fastMerge. Again, I feel like this is redundant and we could simplify all of this into a single implementation responsible for batching and removing null values.

Therefore, I think we have lots of room for removing redundant code around batching and null value handling in the codebase.

(We technically also have duplicate code for null value removal for set operations, though it's not that complex there.)

Streamlining the storage provider backends

A while back I added a common interface for all storage providers, so the Onyx public API side can expect the same functionality from each of the storage providers. Still, the backend implementations don't really match up with what the function names suggest, e.g. multiMerge behaves differently in idb-keyval than in SQLite.

We should therefore think about also streamlining the storage provider implementations, so we can expect every storage provider to do exactly the same thing. On SQLite, this would mean that we facilitate as much of SQLite's low-level functionality as possible (such as JSON_PATCH and delta changes), whereas in the idb-keyval provider, we implement the missing logic there, instead of having it spread around the whole Onyx codebase.
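
For example, a common contract could look something like this (a hypothetical sketch, not the actual provider interface in the codebase):

// Hypothetical streamlined provider contract: every backend implements
// identical multiMerge semantics, so null handling and batching live in
// exactly one place.
type OnyxValue = Record<string, unknown> | null;

interface StorageProvider {
    getItem(key: string): Promise<OnyxValue>;
    setItem(key: string, value: OnyxValue): Promise<void>;
    removeItem(key: string): Promise<void>;
    // Must apply delta changes itself and remove keys set to null, whether
    // natively (SQLite's JSON_PATCH) or in JS (idb-keyval), so callers never
    // need provider-specific null handling.
    multiMerge(pairs: Array<[key: string, delta: OnyxValue]>): Promise<void>;
}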

IMO, ideally we want to handle all the null checking, batching, and merging EITHER in the storage layer or the public Onyx API layer, but not in both. I understand that we have, e.g., the broadcastUpdate function, which we use to update subscribers and therefore need a pre-merged value for, but maybe we can also optimize this?

In general, I just feel like we have way too much redundant code, and especially too many loops over keys and for merging existing values with updates.


I'm tagging the people here that were involved in lots of these changes back then and who might have more ideas or opinions on this. I'm very curious what you guys think about the current state of Onyx. I hope these thoughts were not too vague. Lmk what you think!

cc @fabioh8010 @blazejkustra @tgolen @marcaaron @neil-marcellini

@blazejkustra
Contributor

This idea seems reasonable and would definitely help reduce redundancy. I'm a bit concerned it might become scope creep. Do we have confidence that our current tests are enough to catch issues if we go ahead with these refactors?

@fabioh8010
Contributor

I like the proposed refactor, but I would prefer to make less impactful changes right now, just enough to get my fix working for all situations, that is, to make Onyx.mergeCollection work with it.

The reason is to avoid mixing the two things (big holistic refactor and fix to solve the issue's problem) together in one big PR, so I'm trying to see a way to change Onyx.mergeCollection to make it work with my fix.

@chrispader
Contributor

This idea seems reasonable and would definitely help reduce redundancy. I'm a bit concerned it might become scope creep. Do we have confidence that our current tests are enough to catch issues if we go ahead with these refactors?

I like the proposed refactor, but I would prefer to make less impactful changes right now, just enough to get my fix working for all situations, that is, to make Onyx.mergeCollection work with it.

I agree, we shouldn't handle the refactor in this PR, but in another one. I still think it would be worth tackling this refactor after we find a fix for this issue.

Do we have confidence that our current tests are enough to catch issues if we go ahead with these refactors?

No, it would definitely be a good idea to expand the test suite for such a big refactor

@tgolen
Collaborator

tgolen commented Mar 12, 2025

@chrispader I think you bring up very valid concerns. Something we could consider is to develop a 2.1 version in a feature branch that incorporates a lot of these ideas, while meeting the existing test cases and also expanding them.
