
Merge pull request #436 from splunk/develop
Release/8.1.0.1 + 7.3.8
alishamayor authored Nov 23, 2020
2 parents 7adf4cb + 4ec43b1 commit bf8b219
Showing 6 changed files with 82 additions and 14 deletions.
4 changes: 2 additions & 2 deletions Makefile
@@ -7,8 +7,8 @@ SPLUNK_ANSIBLE_BRANCH ?= develop
 SPLUNK_COMPOSE ?= cluster_absolute_unit.yaml
 # Set Splunk version/build parameters here to define downstream URLs and file names
 SPLUNK_PRODUCT := splunk
-SPLUNK_VERSION := 8.1.0
-SPLUNK_BUILD := f57c09e87251
+SPLUNK_VERSION := 8.1.0.1
+SPLUNK_BUILD := 24fd52428b5a
 ifeq ($(shell arch), s390x)
 SPLUNK_ARCH = s390x
 else
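The Makefile comment notes that `SPLUNK_VERSION` and `SPLUNK_BUILD` "define downstream URLs and file names". As a hedged illustration (the URL layout below follows Splunk's public download convention and is an assumption, not something shown in this diff), the bumped version and build hash compose into a release artifact URL roughly like this:

```shell
# Sketch only: mirrors how version/build typically feed the download URL.
# SPLUNK_ARCH is hardcoded here; the Makefile derives it from `arch`.
SPLUNK_PRODUCT=splunk
SPLUNK_VERSION=8.1.0.1
SPLUNK_BUILD=24fd52428b5a
SPLUNK_ARCH=x86_64

SPLUNK_FILENAME="${SPLUNK_PRODUCT}-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-${SPLUNK_ARCH}.tgz"
SPLUNK_URL="https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME}"
echo "$SPLUNK_URL"
```

Bumping the two variables in one place keeps every downstream file name and URL consistent.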
3 changes: 3 additions & 0 deletions base/centos-7/install.sh
@@ -41,6 +41,9 @@ echo "
 ## Allows people in group sudo to run all commands
 %sudo ALL=(ALL) ALL" >> /etc/sudoers
 
+# Remove nproc limits
+rm -rf /etc/security/limits.d/20-nproc.conf
+
 # Clean
 yum clean all
 rm -rf /install.sh /anaconda-post.log /var/log/anaconda/*
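For context on why deleting that file lifts the process cap: CentOS 7 applies per-user limits from drop-in files under `/etc/security/limits.d/`, and the stock `20-nproc.conf` typically ships content along these lines (shown as a hedged illustration; exact values can vary by image build):

```
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     4096
root       soft    nproc     unlimited
```

With the file removed, no soft `nproc` cap is applied at login, so `ulimit -u` reports `unlimited` — which is exactly what the new tests later in this commit assert.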
2 changes: 1 addition & 1 deletion base/redhat-8/install.sh
@@ -32,7 +32,7 @@ microdnf -y --nodocs install wget sudo shadow-utils procps tar tzdata
 #install busybox direct from the multiarch since epel isn't availible yet for redhat8
 wget -O /bin/busybox https://busybox.net/downloads/binaries/1.28.1-defconfig-multiarch/busybox-`arch`
 chmod +x /bin/busybox
-microdnf -y --nodocs update gnutls kernel-headers
+microdnf -y --nodocs update gnutls kernel-headers librepo libnghttp2
 microdnf -y --nodocs install python2-pip python2-devel redhat-rpm-config gcc libffi-devel openssl-devel
 pip2 --no-cache-dir install requests ansible jmespath
 microdnf -y remove gcc openssl-devel redhat-rpm-config python2-devel device-mapper-libs device-mapper trousers \
27 changes: 27 additions & 0 deletions docs/CHANGELOG.md
@@ -2,6 +2,7 @@

 ## Navigation
 
+* [8.1.0.1](#8101)
 * [8.1.0](#810)
 * [8.0.7](#807)
 * [8.0.6](#806)
@@ -14,6 +15,7 @@
 * [8.0.2](#802)
 * [8.0.1](#801)
 * [8.0.0](#800)
+* [7.3.8](#738)
 * [7.3.7](#737)
 * [7.3.6](#736)
 * [7.3.5](#735)
@@ -39,6 +41,20 @@

 ---
 
+## 8.1.0.1
+
+#### What's New?
+* Releasing new images to support Splunk Enterprise maintenance patch.
+
+#### docker-splunk changes:
+* Bumping Splunk version. For details, see [Fixed issues for 8.1.0.1](https://docs.splunk.com/Documentation/Splunk/8.1.0/ReleaseNotes/Fixedissues)
+* Updated RH8 packages per vulnerability scan
+
+#### splunk-ansible changes:
+* Bugfixes and cleanup
+
+---
+
 ## 8.1.0
 
 #### What's New?
@@ -253,6 +269,17 @@

 ---
 
+## 7.3.8
+
+#### What's New?
+* New Splunk Enterprise maintenance patch. For details, see [Fixed issues for 7.3.8](https://docs.splunk.com/Documentation/Splunk/7.3.8/ReleaseNotes/Fixedissues)
+* Bundling in changes to be consistent with the release of [8.1.0.1](#8101)
+
+#### Changes
+* See [8.1.0.1](#8101) changes
+
+---
+
 ## 7.3.7
 
 #### What's New?
2 changes: 1 addition & 1 deletion docs/SUPPORT.md
@@ -17,7 +17,7 @@ The following prerequisites and dependencies must be installed on each node you
 * Chipset:
     * `splunk/splunk` image supports x86-64 chipsets
     * `splunk/universalforwarder` image supports both x86-64 and s390x chipsets
-* Kernel version 4.0 or higher
+* Kernel version 4.x
 * Docker engine:
     * Docker Enterprise Engine 17.06.2 or higher
     * Docker Community Engine 17.06.2 or higher
58 changes: 48 additions & 10 deletions tests/test_single_splunk_image.py
@@ -81,12 +81,31 @@ def test_splunk_scloud(self):
             time.sleep(5)
             # If the container is still running, we should be able to exec inside
             # Check that the version returns successfully for multiple users
-            exec_command = self.client.exec_create(cid, "scloud version", user="splunk")
-            std_out = self.client.exec_start(exec_command)
-            assert "scloud version " in std_out
-            exec_command = self.client.exec_create(cid, "scloud version", user="ansible")
-            std_out = self.client.exec_start(exec_command)
-            assert "scloud version " in std_out
+            for user in ["splunk", "ansible"]:
+                exec_command = self.client.exec_create(cid, "scloud version", user=user)
+                std_out = self.client.exec_start(exec_command)
+                assert "scloud version " in std_out
         except Exception as e:
             self.logger.error(e)
             raise e
+        finally:
+            if cid:
+                self.client.remove_container(cid, v=True, force=True)
+
+    def test_splunk_ulimit(self):
+        cid = None
+        try:
+            # Run container
+            cid = self.client.create_container(self.SPLUNK_IMAGE_NAME, tty=True, command="no-provision")
+            cid = cid.get("Id")
+            self.client.start(cid)
+            # Wait a bit
+            time.sleep(5)
+            # If the container is still running, we should be able to exec inside
+            # Check that nproc limits are unlimited
+            exec_command = self.client.exec_create(cid, "sudo -u splunk bash -c 'ulimit -u'")
+            std_out = self.client.exec_start(exec_command)
+            assert "unlimited" in std_out
+        except Exception as e:
+            self.logger.error(e)
+            raise e
@@ -2635,12 +2654,31 @@ def test_uf_scloud(self):
             time.sleep(5)
             # If the container is still running, we should be able to exec inside
             # Check that the version returns successfully for multiple users
-            exec_command = self.client.exec_create(cid, "scloud version", user="splunk")
-            std_out = self.client.exec_start(exec_command)
-            assert "scloud version " in std_out
-            exec_command = self.client.exec_create(cid, "scloud version", user="ansible")
-            std_out = self.client.exec_start(exec_command)
-            assert "scloud version " in std_out
+            for user in ["splunk", "ansible"]:
+                exec_command = self.client.exec_create(cid, "scloud version", user=user)
+                std_out = self.client.exec_start(exec_command)
+                assert "scloud version " in std_out
         except Exception as e:
             self.logger.error(e)
             raise e
+        finally:
+            if cid:
+                self.client.remove_container(cid, v=True, force=True)
+
+    def test_uf_ulimit(self):
+        cid = None
+        try:
+            # Run container
+            cid = self.client.create_container(self.UF_IMAGE_NAME, tty=True, command="no-provision")
+            cid = cid.get("Id")
+            self.client.start(cid)
+            # Wait a bit
+            time.sleep(5)
+            # If the container is still running, we should be able to exec inside
+            # Check that nproc limits are unlimited
+            exec_command = self.client.exec_create(cid, "sudo -u splunk bash -c 'ulimit -u'")
+            std_out = self.client.exec_start(exec_command)
+            assert "unlimited" in std_out
+        except Exception as e:
+            self.logger.error(e)
+            raise e
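The refactored tests lean on docker-py's exec pattern: `exec_create()` returns a handle and `exec_start()` returns the command's output, with the per-user check now folded into a loop. The sketch below mirrors that shape with a stand-in client so it runs without a Docker daemon (the `DummyClient` class and its canned output are illustrative assumptions, not part of the test suite):

```python
# Stand-in for docker-py's APIClient, used only to demonstrate the
# exec_create/exec_start loop pattern from the refactored tests.
class DummyClient:
    def exec_create(self, cid, cmd, user="root"):
        # The real client returns a dict containing an exec instance Id.
        return {"Id": f"{cid}:{user}:{cmd}"}

    def exec_start(self, exec_command):
        # The real client would return the command's stdout; we fake it.
        return "scloud version 1.0.0"

client = DummyClient()
outputs = []
# Same loop shape as the refactored test: one version check per user.
for user in ["splunk", "ansible"]:
    exec_command = client.exec_create("cid123", "scloud version", user=user)
    outputs.append(client.exec_start(exec_command))

assert all("scloud version " in out for out in outputs)
```

Looping over users removes the copy-pasted exec/assert blocks the diff deletes, so adding another user later is a one-token change.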
