Since this commit: django/django@a7c256c#diff-87c0869f58253f571c08ccf0fc5c7465 (included in Django 1.8), the get_available_name() method on a storage does two things instead of one: it checks that the filename is unique, and it checks that the filename does not exceed max_length.
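For context, the post-1.8 behaviour works roughly like this (a stdlib-only sketch, not Django's actual code: the real implementation lives in django.core.files.storage.Storage and uses get_random_string(7) for the suffix):

```python
import os
import secrets

def get_available_name_sketch(name, max_length=None, exists=lambda n: False):
    # Rough sketch of Django 1.8+ Storage.get_available_name: keep
    # appending a random suffix while the name collides, and keep
    # trimming the root while the name is longer than max_length.
    root, ext = os.path.splitext(name)
    while exists(name) or (max_length and len(name) > max_length):
        if max_length and len(name) > max_length:
            truncation = len(name) - max_length
            root = root[:-truncation]
            if not root:
                raise RuntimeError("filename cannot be trimmed to max_length")
        # Stand-in for Django's get_random_string(7).
        name = f"{root}_{secrets.token_hex(4)[:7]}{ext}"
    return name
```

The key point for this issue is that both concerns (uniqueness and length) are handled in the same method, so a backend that overrides it for uniqueness silently loses the length check.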
Even though simple handling of max_length was introduced in this commit: 4979dd0 (included in storages version 1.3+), it's only invoked when AWS_S3_FILE_OVERWRITE is False.
When AWS_S3_FILE_OVERWRITE is True, the storages.backends.s3boto.S3BotoStorage.get_available_name() method simply fixes potential issues with slashes in the filename and returns it, since it assumes its only job is to check if the filename is unique. Thus, on a site using the S3BotoStorage backend with overwriting turned on (which is the default), uploading a file with a filename longer than the default FileField length (100 characters) will result in a DataError from the database.
The obvious fix is to add truncation logic similar to Django's Storage.get_available_name into the overridden method in S3BotoStorage. Note that using Django's get_random_string(7) suffix is probably not appropriate, because then files would never get overwritten (and overwriting is presumably desirable behaviour if the overwrite setting is on), since the suffix will always be different. A hash of the full filename should be used as a suffix instead.
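A deterministic variant of that fix could look like the following (a sketch under the assumptions above, not actual django-storages code: the helper name, the md5 choice, and the 7-character digest length are illustrative):

```python
import hashlib
import os

def available_name(name, max_length=100):
    # Sketch of overwrite-friendly truncation: an over-long name is
    # shortened using a hash of the full original name, so the same
    # upload always maps to the same key and still gets overwritten.
    if max_length is None or len(name) <= max_length:
        return name
    root, ext = os.path.splitext(name)
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()[:7]
    # Reserve room for "_<digest>" and the extension.
    keep = max_length - len(ext) - len(digest) - 1
    if keep <= 0:
        raise ValueError("max_length too small for a usable filename")
    return f"{root[:keep]}_{digest}{ext}"
```

Because the suffix is derived from the filename rather than random, repeated uploads of the same long filename resolve to the same truncated key, preserving the overwrite semantics of AWS_S3_FILE_OVERWRITE=True.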
This just happened to me. I think it would be nice if the _clean_name call moved out of get_available_name, in order to decouple the two concerns and prevent this behaviour. The AWS_S3_FILE_OVERWRITE setting should not affect the max_length handling for the field. It was causing serious DataError exceptions in my app.