failure in share import prep re duplicate name and usage #1828

Closed
phillxnet opened this issue Sep 22, 2017 · 9 comments · Fixed by #1838

@phillxnet (Member) commented Sep 22, 2017

Thanks to forum member @magicalyak for reporting and helping to diagnose this issue. Under certain circumstances, when a share on disk is not currently reflected in the db, an attempt is made to import and mount each share. This effort includes a check for duplicate share names, which is unfortunately circumvented by a bug that prevents the check from ever being executed, along with various share calculations that would otherwise also run.

Please see the following forum thread for reference on this issue's origin:
https://forum.rockstor.com/t/subvolumes-not-mounted-but-pools-mount/3797

@magicalyak (Contributor) commented

Since there isn't a way to mount duplicates in /mnt2 due to naming, is there a way to mount the reported subvolid in Rockstor, or even just pick the first one and, once a dupe is detected, simply not mount it (spit an error out in the logs and move on)?

@phillxnet (Member, Author) commented

I am currently working on this issue and have established its root cause as related to incomplete changes relating to issue #930. I am now refactoring the relevant code to help clarify further development and a pending fix for the indicated side effects.

@phillxnet (Member, Author) commented

@magicalyak Good point, but can you please remove it from this issue and open a new feature request, or better still start a design discussion via a forum thread on the same, as it is far too broad for this issue as defined and referenced. Thanks.

@phillxnet (Member, Author) commented

@magicalyak I have now corrected my omission re the referenced forum thread.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 22, 2017
A previous refactor failed to update the comments
re the passed parameter: update accordingly.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 22, 2017
By distinguishing the temp var indices and their source
we improve readability and help with diagnosis going
forward.
@phillxnet (Member, Author) commented

The core cause of this issue is a legacy parameter type used when calling shares_info(); as this call sits within a large try clause, several side effects may result, given that the intended code has not been in effect for a couple of years. I am about to update the erroneous call and begin testing prior to submitting a pull request.
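
To illustrate (a minimal, hypothetical sketch, not rockstor-core's actual code): per the traceback posted further below, shares_info() hands its argument on to mount_root(), which builds the mount path from pool.name; so the legacy call, which passed the pool's mount point string rather than the pool object, guarantees an AttributeError:

# Minimal, hypothetical sketch of the failure mode; not project code.
DEFAULT_MNT_DIR = '/mnt2/'

class Pool(object):
    def __init__(self, name):
        self.name = name

def mount_root(pool):
    # As per fs/btrfs.py in the traceback below: assumes a pool object.
    return DEFAULT_MNT_DIR + pool.name

pool = Pool('luks-pool-on-bcache')
print(mount_root(pool))  # OK: '/mnt2/luks-pool-on-bcache'

try:
    # Legacy-style call: the pool's mount point string, not the object.
    mount_root('/mnt2/luks-pool-on-bcache')
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'name'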

@phillxnet (Member, Author) commented

Replicating @magicalyak's original trigger (that of multiple shares with the same name on different pools):

ERROR [storageadmin.middleware:33] 'unicode' object has no attribute 'name'
Traceback (most recent call last):
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/views/decorators/csrf.py", line 58, in wrapped_view
    return view_func(*args, **kwargs)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/opt/rockstor-dev/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 452, in dispatch
    response = self.handle_exception(exc)
  File "/opt/rockstor-dev/eggs/djangorestframework-3.1.1-py2.7.egg/rest_framework/views.py", line 449, in dispatch
    response = handler(request, *args, **kwargs)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/command.py", line 95, in post
    import_shares(p, request)
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/share_helpers.py", line 111, in import_shares
    cshare.pool.name))
  File "/opt/rockstor-dev/src/rockstor/fs/btrfs.py", line 420, in shares_info
    mnt_pt = mount_root(pool)
  File "/opt/rockstor-dev/src/rockstor/fs/btrfs.py", line 229, in mount_root
    root_pool_mnt = DEFAULT_MNT_DIR + pool.name
AttributeError: 'unicode' object has no attribute 'name'

after the 'second' pool containing the duplicate (by name) share is re-attached and a

systemctl start rockstor-bootstrap

is executed.

N.B. we do not see the intended exception that should catch this situation during import, although one does exist and works within the Web-UI. The intended exception is thought to be blocked by the prior legacy call to shares_info(), which itself causes the above AttributeError and in turn bypasses the intended:

"Another pool(%s) has a Share with this same name(%s) as this pool(%s). This configuration is not supported. You can delete one of them manually with this command: ..."

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 22, 2017
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 22, 2017
Previously we called shares_info() with a pool mount
point but we now pass the pool object. This avoids
masking future exceptions within this try block, as
prior to this commit the call would always fail.
@phillxnet (Member, Author) commented

Correcting the legacy call to shares_info() from within the try clause of import_shares() gives the following intended log entry, identifying the issue found:

ERROR [storageadmin.util:44] exception: Another pool(pool-on-luks-dev) has a Share with this same name(luks-share) as this pool(luks-pool-on-bcache). This configuration is not supported. You can delete one of them manually with this command: btrfs subvol delete /mnt2/[pool name]/luks-share
Traceback (most recent call last):
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/share_helpers.py", line 111, in import_shares
    cshare = Share.objects.get(name=s_on_disk)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
DoesNotExist: Share matching query does not exist.
[22/Sep/2017 16:54:36] DEBUG [storageadmin.util:45] Current Rockstor version: unknown
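
For context, the DoesNotExist above is the expected path for a share that exists on disk but is not yet in the db; a rough sketch of that flow (assuming the storageadmin Share model; the helper name and body are illustrative, not the actual import_shares()):

from storageadmin.models import Share  # requires the Django app context

def import_share_entry(pool, s_on_disk):
    # Illustrative only; not the project's actual import code.
    try:
        cshare = Share.objects.get(name=s_on_disk)
        # Existing db entry: this is where the duplicate-name / wrong-pool
        # checks discussed in this issue need to run.
    except Share.DoesNotExist:
        # New to the db: create an entry for it as part of the import
        # (remaining model fields omitted in this sketch).
        cshare = Share(pool=pool, name=s_on_disk)
        cshare.save()
    return cshare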

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 22, 2017
Moved to .format and added warning re data loss.
@phillxnet (Member, Author) commented

Log error message updated to reflect the data loss component of the suggested command:

"Another pool (pool-on-luks-dev) has a Share with this same name (luks-share) as this pool (luks-pool-on-bcache). This configuration is not supported. You can delete one of them manually with the following command: "btrfs subvol delete /mnt2/[pool name]/luks-share" WARNING this will remove the entire contents of that subvolume."

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 23, 2017
Attempting to improve readability re var names
to indicate origin, i.e. pool db related, db generally,
or pool actual.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 23, 2017
As we now have a debug log switch we can leave
these in place to help with diagnosing future
share import issues. Plus comment review.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 23, 2017
Rarely executed code: if a share was moved from
one pool to another between db updates, and the
db retrieval order of pools was appropriate, an
existing db share entry from a pool prior to
that under share import would be updated. However,
this 'intended' update would fail due to a code
formatting issue, resulting in the following
error: "ValueError: too many values to unpack".
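
For reference, a minimal illustration of that class of error (the data shape is made up for this sketch, not what the import code actually unpacks):

entries = [('luks-share', 258, 2048)]  # illustrative 3-tuples
try:
    for name, subvol_id in entries:  # two names for three values
        pass
except ValueError as e:
    print(e)  # too many values to unpack
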
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 23, 2017
Previously we established an existing db share entry
associated with our pool under share import but
did not ensure that our retrieved db share belonged
to that pool. In a circumstance where an as yet
unscanned pool contains a share by the same name we
could inadvertently update the wrong db share entry.
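
A rough sketch of the tightening described, continuing the naming from the earlier sketch (the pool field is assumed from cshare.pool.name in the tracebacks above; not verified against the final code):

# Looser lookup: can match a same-named share on a different pool.
cshare = Share.objects.get(name=s_on_disk)

# Tighter, illustrative lookup: constrain to the pool under import so a
# same-named share on an as yet unscanned pool cannot be updated by mistake.
cshare = Share.objects.get(name=s_on_disk, pool=pool)
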
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 23, 2017
…#1828

The specific mount command, when run other than directly
after the Share.DoesNotExist exception block, would have
an out-of-scope / undefined pool variable.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 24, 2017
Previously misleading re return type / content
and executed commands. Improve to help future
maintenance.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 24, 2017
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 24, 2017
We have duplicate code and an expectation to use
the same code in other locations. Abstract and
explain in a dedicated docstring.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Sep 25, 2017
Previously we neglected to update (on import) prior-unknown
shares that had moved from one pool to another.
N.B. also update new shares with rusage, eusage, pqgroup_rusage
and pqgroup_eusage on initial create.
@phillxnet (Member, Author) commented

I have a little more testing to do on the associated code / commits and then I'll prepare the pull request.
