Convert LHS to exploit v2 data model #540
What the front-end should do with deprecated molecules in snapshots:
What should the front-end do if molecules are superseded:
My notes from 28.3.2023 meeting:
Leaving this ticket as-is, with the full discussion thread of the features, but now labelled as deep-purple release. #1139 will implement the intermediate fix that enables the v2 update to go live.
This ticket should (probably?) include the question of how to use and display the numbers, and how they're served from the backend. Let's discuss when @boriskovar-m2ms is back - might need a separate ticket.
Frontend work tracking moved to #1190.
New scope (2023-05-11 FvD):
XChemAlign and replacement of Fragalysis API need the LHS to be thoroughly reworked, including hierarchical viewing of stuff.
Also addresses epic 5 (how to find stuff on the LHS.)
See mockups further down. Keeping old text because part of spec.
Old text (2022)
Discussed end of January. @duncanpeacock follow-up questions are pasted below.
Before we finalise spec, @reskyner and @phraenquex need to review the workflow of the scientists.
Hi Frank and Rachael
I've had a few more thoughts on uploading partial Target sets as I was starting to create the issue (hence the formatting). There's a bit more to it than meets the eye. My main worry is the effect on the existing files (hits_ids, sites and alternate names) that are generated from the metadata during the upload (the metadata will now also only apply to part of the Target). Changing the delete/recreate processing for these files might prove a little messy, so I wondered what these files are used for and whether the data in them should really be in tables now?
Hope the below is understandable.
Problem:
Currently, when a target set is uploaded, the processing will upsert the molecules in the Target Set file and automatically delete any existing molecules that are not in the file. There is concern about this automatic deletion of molecules from a usability/tracking perspective.
Proposed Solution:
Diamond would like the deletion to be made optional.
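The proposed behaviour can be sketched as an upsert with a deletion flag. This is a minimal illustration only: the function and record names are hypothetical, not the actual Fragalysis upload code, which operates on Django models rather than dicts.

```python
def upsert_molecules(existing, uploaded, delete_missing=False):
    """Merge an uploaded (possibly partial) target set into the existing set.

    existing / uploaded: dicts mapping molecule code -> record.
    delete_missing: when True, reproduce the current behaviour of removing
    any existing molecule absent from the upload; when False (the proposed
    optional behaviour), molecules not in the file are kept.
    """
    merged = dict(existing)
    merged.update(uploaded)  # insert new molecules, update changed ones
    if delete_missing:
        # Current behaviour: anything not in the uploaded file is dropped.
        merged = {code: rec for code, rec in merged.items() if code in uploaded}
    return merged

# Example: a partial upload touching only one of two existing molecules
# (molecule codes and SMILES below are made up for illustration).
existing = {"Mpro-x0072": {"smiles": "CCO"}, "Mpro-x0104": {"smiles": "c1ccccc1"}}
uploaded = {"Mpro-x0072": {"smiles": "CCN"}}

kept = upsert_molecules(existing, uploaded)                        # Mpro-x0104 survives
pruned = upsert_molecules(existing, uploaded, delete_missing=True) # current behaviour
```

With `delete_missing=False` a partial upload only touches the molecules it names, which is what makes the downstream files (hits_ids, sites, alternate names) the tricky part: they too would need upserting rather than delete/recreate.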
Questions/Thoughts
a) The metadata.csv provided with the upload is used to create further files: hits_ids, sites and alternate names. What to do with these? metadata.csv will now also be a partial upload, so the other files will need to be upserted rather than deleted and recreated (to maintain the link with the target dataset). This makes it more complicated.
b) It might be good to check whether we still need these files, or whether the data would be better stored as tables now - if so, better to spend the time doing the job properly?
c) I'm also a bit concerned about the .zip files that were uploaded and will be downloadable at the end of the process. The downloadable file will need to reflect the whole Target, and we probably need to store the uploaded file for comparison purposes.