add influxdb2 #27

@ -1,6 +1,3 @@
.vscode/
.vault-pass
.vault.yml
.passbolt.yml
inventories/local
venv
.vaultpass
.pyenv
@ -1,8 +0,0 @@
PASSBOLT_BASE_URL: https://passbolt.domain.local/
PASSBOLT_PASSPHRASE: "S3cr3tP4$$w0rd"
PASSBOLT_PRIVATE_KEY: |
  -----BEGIN PGP PRIVATE KEY BLOCK-----
  AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  -----END PGP PRIVATE KEY BLOCK-----

LICENSE
@ -1,17 +0,0 @@
Copyright (C) 2024 - Verdnatura Levante S.L.

This package is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

On Debian systems, the complete text of the GNU General Public
License can be found in "/usr/share/common-licenses/GPL-3".

README.md
@ -2,101 +2,48 @@

Collection of Ansible playbooks used in the Verdnatura server farm.

## Setup Ansible
## Install Ansible

### Debian

Install Ansible package.
Install Ansible on Debian.
```
apt install ansible
```

### Python

Create a Python virtual environment.
```
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip ansible==10.1.0 ansible-builder==3.1.0
```

Before running any Python-dependent command, activate the virtual environment.
```
source venv/bin/activate
```

Once you are done, deactivate the virtual environment.
```
deactivate
```

### All platforms

Install dependencies.
```
pip install -r requirements.txt
ansible-galaxy collection install -r collections/requirements.yml
```

Create a Python virtual environment.
```
python3 -m venv .pyenv
source .pyenv/bin/activate
pip install -r requirements.txt
```

## Run playbook

Before merging changes into protected branches, playbooks should be tested
locally to ensure they work properly. The *inventories/local* inventory is not
uploaded to the repository and can be used for local testing. In any case, it
is advisable to use a different repository to store inventories.

Run playbook on inventory host.
Launch playbook on the fly on a host not declared in the inventory.
```
ansible-playbook -i inventories/local -l <host> [-t tag1,tag2...] playbooks/ping.yml
```

Run playbook on the fly on a host not declared in the inventory.
```
ansible-playbook -i <ip_or_hostname>, playbooks/ping.yml
ansible-playbook -i <ip_or_hostname>, [-t tag1,tag2] playbooks/test.yml
```

*Note the comma at the end of the hostname or IP.*

List available tags for playbook.
## Manage vault

To manage the Ansible vault, place the password into the *.vaultpass* file.

View or edit the vault file.
```
ansible-playbook playbooks/<playbook_name>.yml --list-tags
ansible-vault {view,edit} --vault-pass-file .vaultpass vault.yml
```

## Manage secrets

Secrets can be managed using Ansible Vault or an external keystore; Passbolt
is used in this case. Using an external keystore is recommended to avoid
publicly exposing the secrets, even if they are encrypted.

When running playbooks that use any of the keystores mentioned above, the
*run-playbook.sh* script can be used; it is an overlay over the original
*ansible-playbook* command which injects the necessary parameters.

### Passbolt

Add the necessary environment variables to the *.passbolt.yml* file; the
template file *.passbolt.tpl.yml* is included as a reference:

* https://galaxy.ansible.com/ui/repo/published/anatomicjc/passbolt/docs/

### Ansible vault

To manage the Ansible vault, place the encryption password into the
*.vault-pass* file.

Manage the vault.
```
ansible-vault {view,edit,create} --vault-pass-file .vault-pass .vault.yml
```

> The files used for the vault must only be used locally and
> under **no** circumstances can they be uploaded to the repository.
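As a minimal illustration of the note above (a sketch assuming a POSIX shell and GNU coreutils; the password value is a placeholder, not a real secret), the local password file can be created with owner-only permissions so it stays readable only to you:

```shell
# Hypothetical example: create the local vault password file with
# owner-only permissions. 'MyV4ultP4ss' is a placeholder value.
printf '%s' 'MyV4ultP4ss' > .vault-pass
chmod 600 .vault-pass          # owner read/write only
stat -c '%a' .vault-pass       # prints 600 on GNU coreutils
```

Keeping the file out of version control is handled by the ignore entries added in this same change.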

## Build execution environment for AWX

Create an image with *ansible-builder* and upload it to the registry.
```
ansible-builder build --tag awx-ee:vn1
```
When running playbooks that use the vault, the *vault-playbook.sh* script can
be used; it is an overlay over the original *ansible-playbook* command.

## Common playbooks

@ -110,7 +57,5 @@ ansible-builder build --tag awx-ee:vn1

* https://docs.ansible.com/ansible/latest/reference_appendices/config.html
* https://docs.ansible.com/ansible/latest/collections/ansible/builtin/gather_facts_module.html
* https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html
* https://ansible.readthedocs.io/projects/builder/en/latest/
* https://www.ansible.com/blog/introduction-to-ansible-builder/
* https://github.com/ansible/awx-ee/
* https://www.passbolt.com/blog/managing-secrets-in-ansible-using-passbolt
* https://galaxy.ansible.com/ui/repo/published/anatomicjc/passbolt/

@ -2,10 +2,9 @@
remote_user = root
host_key_checking = False
roles_path = ./roles
inventory = ./inventories/local
inventory = ./inventories/servers
gathering = smart
interpreter_python = auto_silent
deprecation_warnings = False

[privilege_escalation]
become = True
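Since ansible.cfg is plain INI, the settings above can be inspected programmatically; this sketch parses an equivalent fragment with Python's standard `configparser` (the leading `[defaults]` header is assumed here, as the hunk starts at line 2):

```python
import configparser

# Sketch: parse an ansible.cfg-style fragment. The [defaults] header is
# an assumption; the diff hunk above begins at line 2 of the file.
cfg = configparser.ConfigParser()
cfg.read_string("""\
[defaults]
remote_user = root
host_key_checking = False
inventory = ./inventories/servers
gathering = smart

[privilege_escalation]
become = True
""")

print(cfg["defaults"]["inventory"])                      # ./inventories/servers
print(cfg.getboolean("privilege_escalation", "become"))  # True
```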
@ -1,19 +1,16 @@
collections:
  - name: ansible.utils
    version: '>=4.1.0'
  - name: community.general
    version: '>=9.0.0'
    type: galaxy
  - name: ansible.posix
    version: '>=1.5.4'
    type: galaxy
  - name: ansible.utils
    version: '>=4.1.0'
    type: galaxy
  - name: ansible.windows
    version: '>=2.3.0'
    type: galaxy
  - name: anatomicjc.passbolt
    version: '>=0.0.14'
    type: galaxy
  - name: community.crypto
    version: '>=2.14.0'
    type: galaxy
  - name: community.general
    version: '>=9.5.0'
    type: galaxy
@ -1,96 +0,0 @@
ARG EE_BASE_IMAGE="quay.io/centos/centos:stream9"
ARG PYCMD="/usr/bin/python3.12"
ARG PYPKG="python3.12"
ARG PKGMGR_PRESERVE_CACHE=""
ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS=""
ARG ANSIBLE_GALAXY_CLI_ROLE_OPTS=""
ARG ANSIBLE_INSTALL_REFS="ansible-core>=2.17.0 ansible-runner==2.4.0"
ARG PKGMGR="/usr/bin/dnf"

# Base build stage
FROM $EE_BASE_IMAGE as base
USER root
ENV PIP_BREAK_SYSTEM_PACKAGES=1
ARG EE_BASE_IMAGE
ARG PYCMD
ARG PYPKG
ARG PKGMGR_PRESERVE_CACHE
ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS
ARG ANSIBLE_GALAXY_CLI_ROLE_OPTS
ARG ANSIBLE_INSTALL_REFS
ARG PKGMGR

COPY _build/scripts/ /output/scripts/
COPY _build/scripts/entrypoint /opt/builder/bin/entrypoint
RUN $PKGMGR install $PYPKG -y ; if [ -z $PKGMGR_PRESERVE_CACHE ]; then $PKGMGR clean all; fi
RUN /output/scripts/pip_install $PYCMD
RUN $PYCMD -m pip install --no-cache-dir $ANSIBLE_INSTALL_REFS

# Galaxy build stage
FROM base as galaxy
ARG EE_BASE_IMAGE
ARG PYCMD
ARG PYPKG
ARG PKGMGR_PRESERVE_CACHE
ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS
ARG ANSIBLE_GALAXY_CLI_ROLE_OPTS
ARG ANSIBLE_INSTALL_REFS
ARG PKGMGR

RUN /output/scripts/check_galaxy
COPY _build /build
WORKDIR /build

RUN mkdir -p /usr/share/ansible
RUN ansible-galaxy role install $ANSIBLE_GALAXY_CLI_ROLE_OPTS -r requirements.yml --roles-path "/usr/share/ansible/roles"
RUN ANSIBLE_GALAXY_DISABLE_GPG_VERIFY=1 ansible-galaxy collection install $ANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml --collections-path "/usr/share/ansible/collections"

# Builder build stage
FROM base as builder
ENV PIP_BREAK_SYSTEM_PACKAGES=1
WORKDIR /build
ARG EE_BASE_IMAGE
ARG PYCMD
ARG PYPKG
ARG PKGMGR_PRESERVE_CACHE
ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS
ARG ANSIBLE_GALAXY_CLI_ROLE_OPTS
ARG ANSIBLE_INSTALL_REFS
ARG PKGMGR

RUN $PYCMD -m pip install --no-cache-dir bindep pyyaml packaging

COPY --from=galaxy /usr/share/ansible /usr/share/ansible

COPY _build/requirements.txt requirements.txt
COPY _build/bindep.txt bindep.txt
RUN $PYCMD /output/scripts/introspect.py introspect --user-pip=requirements.txt --user-bindep=bindep.txt --write-bindep=/tmp/src/bindep.txt --write-pip=/tmp/src/requirements.txt
RUN /output/scripts/assemble

# Final build stage
FROM base as final
ENV PIP_BREAK_SYSTEM_PACKAGES=1
ARG EE_BASE_IMAGE
ARG PYCMD
ARG PYPKG
ARG PKGMGR_PRESERVE_CACHE
ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS
ARG ANSIBLE_GALAXY_CLI_ROLE_OPTS
ARG ANSIBLE_INSTALL_REFS
ARG PKGMGR

RUN /output/scripts/check_ansible $PYCMD

COPY --from=galaxy /usr/share/ansible /usr/share/ansible

COPY --from=builder /output/ /output/
RUN /output/scripts/install-from-bindep && rm -rf /output/wheels
RUN chmod ug+rw /etc/passwd
RUN mkdir -p /runner && chgrp 0 /runner && chmod -R ug+rwx /runner
WORKDIR /runner
RUN $PYCMD -m pip install --no-cache-dir 'dumb-init==1.2.5'
RUN rm -rf /output
LABEL ansible-execution-environment=true
USER 1000
ENTRYPOINT ["/opt/builder/bin/entrypoint", "dumb-init"]
CMD ["bash"]
@ -1,18 +0,0 @@
git-core [platform:rpm]
python3.11-devel [platform:rpm compile]
libcurl-devel [platform:rpm compile]
krb5-devel [platform:rpm compile]
krb5-workstation [platform:rpm]
subversion [platform:rpm]
subversion [platform:dpkg]
git-lfs [platform:rpm]
sshpass [platform:rpm]
rsync [platform:rpm]
epel-release [platform:rpm]
unzip [platform:rpm]
podman-remote [platform:rpm]
cmake [platform:rpm compile]
gcc [platform:rpm compile]
gcc-c++ [platform:rpm compile]
make [platform:rpm compile]
openssl-devel [platform:rpm compile]
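Each entry above names a system package followed by profile markers: `platform:rpm`/`platform:dpkg` select the package manager family, and `compile` restricts the package to build-time installs. This illustrative sketch (not the real bindep tool, which also supports negation and OR groups) shows how such markers filter a package list:

```python
# Sketch only: select packages from a bindep.txt-style listing whose
# bracketed profile markers are all present in the active profile set.
def matching_packages(bindep_text, active_profiles):
    selected = []
    for line in bindep_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, marker = line.partition("[")
        profiles = marker.rstrip("]").split() if marker else []
        # keep the package only when every listed profile is active;
        # 'platform:rpm' is matched on its last component, 'rpm'
        if all(p.split(":")[-1] in active_profiles for p in profiles):
            selected.append(name.strip())
    return selected

sample = "git-core [platform:rpm]\ngcc [platform:rpm compile]\nsubversion [platform:dpkg]"
print(matching_packages(sample, {"rpm"}))             # ['git-core']
print(matching_packages(sample, {"rpm", "compile"}))  # ['git-core', 'gcc']
```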
@ -1,3 +0,0 @@
py-passbolt==0.0.18
cryptography==3.3.2
passlib==1.7.4
@ -1,16 +0,0 @@
collections:
  - name: ansible.utils
    version: '>=4.1.0'
    type: galaxy
  - name: ansible.windows
    version: '>=2.3.0'
    type: galaxy
  - name: anatomicjc.passbolt
    version: '>=0.0.14'
    type: galaxy
  - name: community.crypto
    version: '>=2.14.0'
    type: galaxy
  - name: community.general
    version: '>=9.5.0'
    type: galaxy
@ -1,169 +0,0 @@
#!/bin/bash
# Copyright (c) 2019 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Make a list of bindep dependencies and a collection of built binary
# wheels for the repo in question as well as its python dependencies.
# Install javascript tools as well to support python that needs javascript
# at build time.
set -ex

RELEASE=$(source /etc/os-release; echo $ID)

# NOTE(pabelanger): Allow users to force either microdnf or dnf as a package
# manager.
PKGMGR="${PKGMGR:-}"
PKGMGR_OPTS="${PKGMGR_OPTS:-}"
PKGMGR_PRESERVE_CACHE="${PKGMGR_PRESERVE_CACHE:-}"

PYCMD="${PYCMD:=/usr/bin/python3}"
PIPCMD="${PIPCMD:=$PYCMD -m pip}"

if [ -z $PKGMGR ]; then
    # Expect dnf to be installed, however if we find microdnf default to it.
    PKGMGR=/usr/bin/dnf
    if [ -f "/usr/bin/microdnf" ]; then
        PKGMGR=/usr/bin/microdnf
    fi
fi

if [ "$PKGMGR" = "/usr/bin/microdnf" ]
then
    if [ -z "${PKGMGR_OPTS}" ]; then
        # NOTE(pabelanger): skip install docs and weak dependencies to
        # make smaller images. Sadly, setting these in dnf.conf doesn't
        # appear to work.
        PKGMGR_OPTS="--nodocs --setopt install_weak_deps=0"
    fi
fi

# NOTE(pabelanger): Ensure all the directories we use exist regardless
# of the user first creating them or not.
mkdir -p /output/bindep
mkdir -p /output/wheels
mkdir -p /tmp/src

cd /tmp/src

function install_bindep {
    # Protect from the bindep builder image use of the assemble script
    # to produce a wheel. Note we append because we want all
    # sibling packages in here too
    if [ -f bindep.txt ] ; then
        bindep -l newline | sort >> /output/bindep/run.txt || true
        if [ "$RELEASE" == "centos" ] ; then
            bindep -l newline -b epel | sort >> /output/bindep/stage.txt || true
            grep -Fxvf /output/bindep/run.txt /output/bindep/stage.txt >> /output/bindep/epel.txt || true
            rm -rf /output/bindep/stage.txt
        fi
        compile_packages=$(bindep -b compile || true)
        if [ ! -z "$compile_packages" ] ; then
            $PKGMGR install -y $PKGMGR_OPTS ${compile_packages}
        fi
    fi
}

function install_wheels {
    # NOTE(pabelanger): If there are build requirements to install, do so.
    # However do not cache them as we do not want them in the final image.
    if [ -f /tmp/src/build-requirements.txt ] && [ ! -f /tmp/src/.build-requirements.txt ] ; then
        $PIPCMD install $CONSTRAINTS $PIP_OPTS --no-cache -r /tmp/src/build-requirements.txt
        touch /tmp/src/.build-requirements.txt
    fi
    # Build a wheel so that we have an install target.
    # pip install . in the container context with the mounted
    # source dir gets ... exciting, if setup.py exists.
    # We run sdist first to trigger code generation steps such
    # as are found in zuul, since the sequencing otherwise
    # happens in a way that makes wheel content copying unhappy.
    # pip wheel isn't used here because it puts all of the output
    # in the output dir and not the wheel cache, so it's not
    # possible to tell what is the wheel for the project and
    # what is the wheel cache.
    if [ -f setup.py ] ; then
        $PYCMD setup.py sdist bdist_wheel -d /output/wheels
    fi

    # Install everything so that the wheel cache is populated with
    # transitive depends. If a requirements.txt file exists, install
    # it directly so that people can use git url syntax to do things
    # like pick up patched but unreleased versions of dependencies.
    # Only do this for the main package (i.e. only write requirements
    # once).
    if [ -f /tmp/src/requirements.txt ] && [ ! -f /output/requirements.txt ] ; then
        $PIPCMD install $CONSTRAINTS $PIP_OPTS --cache-dir=/output/wheels -r /tmp/src/requirements.txt
        cp /tmp/src/requirements.txt /output/requirements.txt
    fi
    # If we didn't build wheels, we can skip trying to install them.
    if [ $(ls -1 /output/wheels/*whl 2>/dev/null | wc -l) -gt 0 ]; then
        $PIPCMD uninstall -y /output/wheels/*.whl
        $PIPCMD install $CONSTRAINTS $PIP_OPTS --cache-dir=/output/wheels /output/wheels/*whl
    fi
}

PACKAGES=$*
PIP_OPTS="${PIP_OPTS-}"

# bindep the main package
install_bindep

# go through ZUUL_SIBLINGS, if any, and build those wheels too
for sibling in ${ZUUL_SIBLINGS:-}; do
    pushd .zuul-siblings/${sibling}
    install_bindep
    popd
done

# Use a clean virtualenv for install steps to prevent things from the
# current environment making us not build a wheel.
# NOTE(pabelanger): We allow users to install distro python packages of
# libraries. This is important for projects that eventually want to produce
# an RPM or offline install.
$PYCMD -m venv /tmp/venv --system-site-packages --without-pip
source /tmp/venv/bin/activate

# If there is an upper-constraints.txt file in the source tree,
# use it in the pip commands.
if [ -f /tmp/src/upper-constraints.txt ] ; then
    cp /tmp/src/upper-constraints.txt /output/upper-constraints.txt
    CONSTRAINTS="-c /tmp/src/upper-constraints.txt"
fi

# If we got a list of packages, install them, otherwise install the
# main package.
if [[ $PACKAGES ]] ; then
    $PIPCMD install $CONSTRAINTS $PIP_OPTS --cache-dir=/output/wheels $PACKAGES
    for package in $PACKAGES ; do
        echo "$package" >> /output/packages.txt
    done
else
    install_wheels
fi

# go through ZUUL_SIBLINGS, if any, and build those wheels too
for sibling in ${ZUUL_SIBLINGS:-}; do
    pushd .zuul-siblings/${sibling}
    install_wheels
    popd
done

if [ -z $PKGMGR_PRESERVE_CACHE ]; then
    $PKGMGR clean all
    rm -rf /var/cache/{dnf,yum}
fi

rm -rf /var/lib/dnf/history.*
rm -rf /var/log/{dnf.*,hawkey.log}
rm -rf /tmp/venv
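The wheel-count guard in `install_wheels` above can be exercised on its own; this standalone sketch (using a temporary directory in place of `/output/wheels`) shows the pattern of counting glob matches before acting on them:

```shell
# Sketch of the guard used in install_wheels: only act when at least one
# wheel file actually exists; an unmatched glob would otherwise cause
# the install step to fail.
wheel_dir=$(mktemp -d)
touch "$wheel_dir/example-1.0-py3-none-any.whl"
if [ "$(ls -1 "$wheel_dir"/*.whl 2>/dev/null | wc -l)" -gt 0 ]; then
    echo "wheels found"
fi
```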
@ -1,110 +0,0 @@
#!/bin/bash
# Copyright (c) 2023 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#####################################################################
# Script to validate that Ansible and Ansible Runner are installed.
#
# Usage: check_ansible <PYCMD>
#
# Options:
#   PYCMD - The path to the python executable to use.
#####################################################################

set -x

PYCMD=$1

if [ -z "$PYCMD" ]
then
    echo "Usage: check_ansible <PYCMD>"
    exit 1
fi

if [ ! -x "$PYCMD" ]
then
    echo "$PYCMD is not an executable"
    exit 1
fi

ansible --version

if [ $? -ne 0 ]
then
    cat<<EOF
**********************************************************************
ERROR - Missing Ansible installation

An Ansible installation cannot be found in the final builder image.

Ansible must be installed in the final image. If you are using a
recent enough version of the execution environment file, you may
use the 'dependencies.ansible_core' configuration option to install
Ansible for you, or use 'additional_build_steps' to manually do
this yourself. Alternatively, use a base image with Ansible already
installed.
**********************************************************************
EOF
    exit 1
fi

ansible-runner --version

if [ $? -ne 0 ]
then
    cat<<EOF
**********************************************************************
ERROR - Missing Ansible Runner installation

An Ansible Runner installation cannot be found in the final builder
image.

Ansible Runner must be installed in the final image. If you are
using a recent enough version of the execution environment file, you
may use the 'dependencies.ansible_runner' configuration option to
install Ansible Runner for you, or use 'additional_build_steps' to
manually do this yourself. Alternatively, use a base image with
Ansible Runner already installed.
**********************************************************************
EOF
    exit 1
fi

$PYCMD -c 'import ansible ; import ansible_runner'

if [ $? -ne 0 ]
then
    cat<<EOF
**********************************************************************
ERROR - Missing Ansible or Ansible Runner for selected Python

An Ansible and/or Ansible Runner installation cannot be found in
the final builder image using the following Python interpreter:

$PYCMD

Ansible and Ansible Runner must be installed in the final image and
available to the selected Python interpreter. If you are using a
recent enough version of the execution environment file, you may use
the 'dependencies.ansible_core' configuration option to install
Ansible and the 'dependencies.ansible_runner' configuration option
to install Ansible Runner. You can also use 'additional_build_steps'
to manually do this yourself. Alternatively, use a base image with
Ansible and Ansible Runner already installed.
**********************************************************************
EOF
    exit 1
fi
exit 0
@ -1,46 +0,0 @@
#!/bin/bash
# Copyright (c) 2023 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#####################################################################
# Script to validate that Ansible Galaxy is installed on the system.
#####################################################################

set -x

ansible-galaxy --version

if [ $? -ne 0 ]
then
    cat<<EOF
**********************************************************************
ERROR - Missing Ansible installation

The 'ansible-galaxy' command is not found in the base image. This
image is used to create the intermediary image that performs the
Galaxy collection and role installation process.

Ansible must be installed in the base image. If you are using a
recent enough version of the execution environment file, you may
use the 'dependencies.ansible_core' configuration option to install
Ansible for you, or use 'additional_build_steps' to manually do
this yourself. Alternatively, use a base image with Ansible already
installed.
**********************************************************************
EOF
    exit 1
fi

exit 0
@ -1,152 +0,0 @@
#!/usr/bin/env bash

# Copyright: (c) 2023, Ansible Project
# Apache License, Version 2.0 (see LICENSE.md or https://www.apache.org/licenses/LICENSE-2.0)

# This entrypoint script papers over a number of problems that manifest under different container runtimes when
# using ephemeral UIDs, then chain-execs to the requested init system and/or command. It is an implementation
# detail for the convenience of Ansible execution environments built by ansible-builder.
#
# If we're running as a legit user that has an entry in /etc/passwd and a valid and writeable homedir, we're all good.
#
# If the current uid is not in /etc/passwd, we'll attempt to add it, but /etc/passwd is often not writable by GID 0.
# `ansible-builder` defaults to making /etc/passwd writable by GID0 by default for maximum compatibility, but this is
# not guaranteed. Some runtimes/wrappers (eg podman, cri-o) already create an /etc/passwd entry on the fly as-needed,
# but they may set the homedir to something inaccessible (eg, `/`, WORKDIR).
#
# There are numerous cases where a missing or incorrect homedir in /etc/passwd are fatal. It breaks
# `async` in ansible-core, things like `echo ~someuid`, and numerous other software packages that assume a valid POSIX
# user configuration.
#
# If the homedir listed in /etc/passwd is not writeable by the current user (supposed to be primary GID0), we'll try
# to make it writeable (except `/`), or select another writeable home directory from `$HOME`, `/runner`, or `/tmp` and
# update $HOME (and /etc/passwd if possible) accordingly for the current process chain.
#
# This script is generally silent by default, but some likely-fatal cases will issue a brief warning to stderr. The
# envvars described below can be set before container init to cause faster failures and/or get tracing output.

# options:
# EP_BASH_DEBUG=1 (enable set -x)
# EP_DEBUG_TRACE=1 (enable debug trace to stderr)
# EP_ON_ERROR=ignore/warn/fail (default ignore)

set -eu

if (( "${EP_BASH_DEBUG:=0}" == 1 )); then
    set -x
fi

: "${EP_DEBUG_TRACE:=0}"
: "${EP_ON_ERROR:=warn}"
: "${HOME:=}"
CUR_UID=$(id -u)
CUR_USERNAME=$(id -u -n 2> /dev/null || true)  # whoami-free way to get current username, falls back to current uid

DEFAULT_HOME="/runner"
DEFAULT_SHELL="/bin/bash"

if (( "$EP_DEBUG_TRACE" == 1 )); then
    function log_debug() { echo "EP_DEBUG: $1" 1>&2; }
else
    function log_debug() { :; }
fi

log_debug "entrypoint.sh started"

case "$EP_ON_ERROR" in
    "fail")
        function maybe_fail() { echo "EP_FAIL: $1" 1>&2; exit 1; }
        ;;
    "warn")
        function maybe_fail() { echo "EP_WARN: $1" 1>&2; }
        ;;
    *)
        function maybe_fail() { log_debug "EP_FAIL (ignored): $1"; }
        ;;
esac

function is_dir_writable() {
    [ -d "$1" ] && [ -w "$1" ] && [ -x "$1" ]
}

function ensure_current_uid_in_passwd() {
    log_debug "is current uid ${CUR_UID} in /etc/passwd?"

    if ! getent passwd "${CUR_USERNAME}" &> /dev/null ; then
        if [ -w "/etc/passwd" ]; then
            log_debug "appending missing uid ${CUR_UID} into /etc/passwd"
            # use the default homedir; we may have to rewrite it to another value later if it's inaccessible
            echo "${CUR_UID}:x:${CUR_UID}:0:container user ${CUR_UID}:${DEFAULT_HOME}:${DEFAULT_SHELL}" >> /etc/passwd
        else
            maybe_fail "uid ${CUR_UID} is missing from /etc/passwd, which is not writable; this error is likely fatal"
        fi
    else
        log_debug "current uid is already in /etc/passwd"
    fi
}

function ensure_writeable_homedir() {
    if (is_dir_writable "${CANDIDATE_HOME}") ; then
        log_debug "candidate homedir ${CANDIDATE_HOME} is valid and writeable"
    else
        if [ "${CANDIDATE_HOME}" == "/" ]; then
            log_debug "skipping attempt to fix permissions on / as homedir"
            return 1
        fi

        log_debug "candidate homedir ${CANDIDATE_HOME} is missing or not writeable; attempt to fix"
        if ! (mkdir -p "${CANDIDATE_HOME}" >& /dev/null && chmod -R ug+rwx "${CANDIDATE_HOME}" >& /dev/null) ; then
            log_debug "candidate homedir ${CANDIDATE_HOME} cannot be made writeable"
            return 1
        else
            log_debug "candidate homedir ${CANDIDATE_HOME} was successfully made writeable"
        fi
    fi

    # this might work; export it even if we end up not being able to update /etc/passwd
    # this ensures the envvar matches current reality for this session; future sessions should set automatically if /etc/passwd is accurate
    export HOME=${CANDIDATE_HOME}

    if [ "${CANDIDATE_HOME}" == "${PASSWD_HOME}" ] ; then
        log_debug "candidate homedir ${CANDIDATE_HOME} matches /etc/passwd"
        return 0
    fi

    if ! [ -w /etc/passwd ]; then
        log_debug "candidate homedir ${CANDIDATE_HOME} is valid for ${CUR_USERNAME}, but /etc/passwd is not writable to update it"
        return 1
    fi

    log_debug "resetting homedir for user ${CUR_USERNAME} to ${CANDIDATE_HOME} in /etc/passwd"

    # sed -i wants to create a tempfile next to the original, which won't work with /etc permissions in many cases,
    # so just do it in memory and overwrite the existing file if we succeeded
    NEWPW=$(sed -r "s;(^${CUR_USERNAME}:(.*:){4})(.*:);\1${CANDIDATE_HOME}:;g" /etc/passwd)
    echo "${NEWPW}" > /etc/passwd
}

ensure_current_uid_in_passwd

log_debug "current value of HOME is ${HOME}"

PASSWD_HOME=$(getent passwd "${CUR_USERNAME}" | cut -d: -f6)
log_debug "user ${CUR_USERNAME} homedir from /etc/passwd is ${PASSWD_HOME}"

CANDIDATE_HOMES=("${PASSWD_HOME}" "${HOME}" "${DEFAULT_HOME}" "/tmp")

# we'll set this in the loop as soon as we find a writeable dir
unset HOME

for CANDIDATE_HOME in "${CANDIDATE_HOMES[@]}"; do
    if ensure_writeable_homedir ; then
        break
    fi
done

if ! [ -v HOME ] ; then
    maybe_fail "a valid homedir could not be set for ${CUR_USERNAME}; this is likely fatal"
fi

# chain exec whatever we were asked to run (ideally an init system) to keep any envvar state we've set
log_debug "chain exec-ing requested command $*"
exec "${@}"
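The in-memory homedir rewrite in `ensure_writeable_homedir` above can be seen in isolation by applying the same `sed` expression to a sample passwd line instead of the real `/etc/passwd` (the username, uid, and paths here are placeholders):

```shell
# Standalone sketch of the sed rewrite used above: replace the 6th
# (home directory) field of a passwd line for the matched user.
CUR_USERNAME=alice
CANDIDATE_HOME=/runner
line='alice:x:1000:0:container user 1000:/:/bin/bash'
echo "$line" | sed -r "s;(^${CUR_USERNAME}:(.*:){4})(.*:);\1${CANDIDATE_HOME}:;g"
# prints: alice:x:1000:0:container user 1000:/runner:/bin/bash
```

The first capture group pins the username plus the next four colon-delimited fields, so only the home field is replaced.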
@ -1,105 +0,0 @@
#!/bin/bash
# Copyright (c) 2019 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -ex

# NOTE(pabelanger): Allow users to force either microdnf or dnf as a package
# manager.
PKGMGR="${PKGMGR:-}"
PKGMGR_OPTS="${PKGMGR_OPTS:-}"
PKGMGR_PRESERVE_CACHE="${PKGMGR_PRESERVE_CACHE:-}"

PYCMD="${PYCMD:=/usr/bin/python3}"
PIPCMD="${PIPCMD:=$PYCMD -m pip}"
PIP_OPTS="${PIP_OPTS-}"

if [ -z "$PKGMGR" ]; then
    # Expect dnf to be installed, however if we find microdnf default to it.
    PKGMGR=/usr/bin/dnf
    if [ -f "/usr/bin/microdnf" ]; then
        PKGMGR=/usr/bin/microdnf
    fi
fi

if [ "$PKGMGR" = "/usr/bin/microdnf" ]
then
    if [ -z "${PKGMGR_OPTS}" ]; then
        # NOTE(pabelanger): skip install docs and weak dependencies to
        # make smaller images. Sadly, setting these in dnf.conf doesn't
        # appear to work.
        PKGMGR_OPTS="--nodocs --setopt install_weak_deps=0"
    fi
fi

if [ -f /output/bindep/run.txt ] ; then
    PACKAGES=$(cat /output/bindep/run.txt)
    if [ ! -z "$PACKAGES" ]; then
        $PKGMGR install -y $PKGMGR_OPTS $PACKAGES
    fi
fi

if [ -f /output/bindep/epel.txt ] ; then
    EPEL_PACKAGES=$(cat /output/bindep/epel.txt)
    if [ ! -z "$EPEL_PACKAGES" ]; then
        $PKGMGR install -y $PKGMGR_OPTS --enablerepo epel $EPEL_PACKAGES
    fi
fi

# If there's a constraints file, use it.
if [ -f /output/upper-constraints.txt ] ; then
    CONSTRAINTS="-c /output/upper-constraints.txt"
fi

# If a requirements.txt file exists,
# install it directly so that people can use git url syntax
# to do things like pick up patched but unreleased versions
# of dependencies.
if [ -f /output/requirements.txt ] ; then
    $PIPCMD install $CONSTRAINTS $PIP_OPTS --cache-dir=/output/wheels -r /output/requirements.txt
fi

# Add any requested extras to the list of things to install
EXTRAS=""
for extra in $* ; do
    EXTRAS="${EXTRAS} -r /output/$extra/requirements.txt"
done

if [ -f /output/packages.txt ] ; then
    # If a package list was passed to assemble, install that in the final
    # image.
    $PIPCMD install $CONSTRAINTS $PIP_OPTS --cache-dir=/output/wheels -r /output/packages.txt $EXTRAS
else
    # Install the wheels. Uninstall any existing version as siblings may
    # be built with the same version number as the latest release, but we
    # really want the speculatively built wheels installed over any
    # automatic dependencies.
    # NOTE(pabelanger): It is possible a project may not have a wheel, but does have requirements.txt
    if [ $(ls -1 /output/wheels/*whl 2>/dev/null | wc -l) -gt 0 ]; then
        $PIPCMD uninstall -y /output/wheels/*.whl
        $PIPCMD install $CONSTRAINTS $PIP_OPTS --cache-dir=/output/wheels /output/wheels/*.whl $EXTRAS
    elif [ ! -z "$EXTRAS" ] ; then
        $PIPCMD uninstall -y $EXTRAS
        $PIPCMD install $CONSTRAINTS $PIP_OPTS --cache-dir=/output/wheels $EXTRAS
    fi
fi

# clean up after ourselves, unless requested to keep the cache
if [[ "$PKGMGR_PRESERVE_CACHE" != always ]]; then
    $PKGMGR clean all
    rm -rf /var/cache/{dnf,yum}
fi

rm -rf /var/lib/dnf/history.*
rm -rf /var/log/{dnf.*,hawkey.log}
@ -1,507 +0,0 @@
from __future__ import annotations

import argparse
import logging
import os
import re
import sys
import yaml

from packaging.requirements import InvalidRequirement, Requirement


BASE_COLLECTIONS_PATH = '/usr/share/ansible/collections'


# regex for a comment at the start of a line, or embedded with leading space(s)
COMMENT_RE = re.compile(r'(?:^|\s+)#.*$')


EXCLUDE_REQUIREMENTS = frozenset((
    # obviously already satisfied or unwanted
    'ansible', 'ansible-base', 'python', 'ansible-core',
    # general python test requirements
    'tox', 'pycodestyle', 'yamllint', 'pylint',
    'flake8', 'pytest', 'pytest-xdist', 'coverage', 'mock', 'testinfra',
    # test requirements highly specific to Ansible testing
    'ansible-lint', 'molecule', 'galaxy-importer', 'voluptuous',
    # already present in image for py3 environments
    'yaml', 'pyyaml', 'json',
))


logger = logging.getLogger(__name__)


class CollectionDefinition:
    """
    This class represents the dependency metadata for a collection
    should be replaced by logic to hit the Galaxy API if made available
    """

    def __init__(self, collection_path):
        self.reference_path = collection_path

        # NOTE: Filenames should match constants.DEFAULT_EE_BASENAME and constants.YAML_FILENAME_EXTENSIONS.
        meta_file_base = os.path.join(collection_path, 'meta', 'execution-environment')
        ee_exists = False
        for ext in ('yml', 'yaml'):
            meta_file = f"{meta_file_base}.{ext}"
            if os.path.exists(meta_file):
                with open(meta_file, 'r') as f:
                    self.raw = yaml.safe_load(f)
                ee_exists = True
                break

        if not ee_exists:
            self.raw = {'version': 1, 'dependencies': {}}
            # Automatically infer requirements for collection
            for entry, filename in [('python', 'requirements.txt'), ('system', 'bindep.txt')]:
                candidate_file = os.path.join(collection_path, filename)
                if has_content(candidate_file):
                    self.raw['dependencies'][entry] = filename

    def target_dir(self):
        namespace, name = self.namespace_name()
        return os.path.join(
            BASE_COLLECTIONS_PATH, 'ansible_collections',
            namespace, name
        )

    def namespace_name(self):
        "Returns 2-tuple of namespace and name"
        path_parts = [p for p in self.reference_path.split(os.path.sep) if p]
        return tuple(path_parts[-2:])

    def get_dependency(self, entry):
        """A collection is only allowed to reference a file by a relative path
        which is relative to the collection root
        """
        req_file = self.raw.get('dependencies', {}).get(entry)
        if req_file is None:
            return None
        if os.path.isabs(req_file):
            raise RuntimeError(
                'Collections must specify relative paths for requirements files. '
                f'The file {req_file} specified by {self.reference_path} violates this.'
            )

        return req_file


def line_is_empty(line):
    return bool((not line.strip()) or line.startswith('#'))


def read_req_file(path):
    """Provide some minimal error and display handling for file reading"""
    if not os.path.exists(path):
        print(f'Expected requirements file not present at: {os.path.abspath(path)}')
    with open(path, 'r') as f:
        return f.read()


def pip_file_data(path):
    pip_content = read_req_file(path)

    pip_lines = []
    for line in pip_content.split('\n'):
        if line_is_empty(line):
            continue
        if line.startswith('-r') or line.startswith('--requirement'):
            _, new_filename = line.split(None, 1)
            new_path = os.path.join(os.path.dirname(path or '.'), new_filename)
            pip_lines.extend(pip_file_data(new_path))
        else:
            pip_lines.append(line)

    return pip_lines


def bindep_file_data(path):
    sys_content = read_req_file(path)

    sys_lines = []
    for line in sys_content.split('\n'):
        if line_is_empty(line):
            continue
        sys_lines.append(line)

    return sys_lines


def process_collection(path):
    """Return a tuple of (python_dependencies, system_dependencies) for the
    collection install path given.
    Both items returned are a list of dependencies.

    :param str path: root directory of collection (this would contain galaxy.yml file)
    """
    col_def = CollectionDefinition(path)

    py_file = col_def.get_dependency('python')
    pip_lines = []
    if py_file:
        pip_lines = pip_file_data(os.path.join(path, py_file))

    sys_file = col_def.get_dependency('system')
    bindep_lines = []
    if sys_file:
        bindep_lines = bindep_file_data(os.path.join(path, sys_file))

    return (pip_lines, bindep_lines)


def process(data_dir=BASE_COLLECTIONS_PATH,
            user_pip=None,
            user_bindep=None,
            exclude_pip=None,
            exclude_bindep=None,
            exclude_collections=None):
    """
    Build a dictionary of Python and system requirements from any collections
    installed in data_dir, and any user specified requirements.

    Excluded requirements, if any, will be inserted into the return dict.

    Example return dict:
    {
        'python': {
            'collection.a': ['abc', 'def'],
            'collection.b': ['ghi'],
            'user': ['jkl'],
            'exclude': ['abc'],
        },
        'system': {
            'collection.a': ['ZYX'],
            'user': ['WVU'],
            'exclude': ['ZYX'],
        },
        'excluded_collections': [
            'a.b',
        ]
    }
    """
    paths = []
    path_root = os.path.join(data_dir, 'ansible_collections')

    # build a list of all the valid collection paths
    if os.path.exists(path_root):
        for namespace in sorted(os.listdir(path_root)):
            if not os.path.isdir(os.path.join(path_root, namespace)):
                continue
            for name in sorted(os.listdir(os.path.join(path_root, namespace))):
                collection_dir = os.path.join(path_root, namespace, name)
                if not os.path.isdir(collection_dir):
                    continue
                files_list = os.listdir(collection_dir)
                if 'galaxy.yml' in files_list or 'MANIFEST.json' in files_list:
                    paths.append(collection_dir)

    # populate the requirements content
    py_req = {}
    sys_req = {}
    for path in paths:
        col_pip_lines, col_sys_lines = process_collection(path)
        col_def = CollectionDefinition(path)
        namespace, name = col_def.namespace_name()
        key = f'{namespace}.{name}'

        if col_pip_lines:
            py_req[key] = col_pip_lines

        if col_sys_lines:
            sys_req[key] = col_sys_lines

    # add on entries from user files, if they are given
    if user_pip:
        col_pip_lines = pip_file_data(user_pip)
        if col_pip_lines:
            py_req['user'] = col_pip_lines
    if exclude_pip:
        col_pip_exclude_lines = pip_file_data(exclude_pip)
        if col_pip_exclude_lines:
            py_req['exclude'] = col_pip_exclude_lines
    if user_bindep:
        col_sys_lines = bindep_file_data(user_bindep)
        if col_sys_lines:
            sys_req['user'] = col_sys_lines
    if exclude_bindep:
        col_sys_exclude_lines = bindep_file_data(exclude_bindep)
        if col_sys_exclude_lines:
            sys_req['exclude'] = col_sys_exclude_lines

    retval = {
        'python': py_req,
        'system': sys_req,
    }

    if exclude_collections:
        # This file should just be a newline separated list of collection names,
        # so reusing bindep_file_data() to read it should work fine.
        excluded_collection_list = bindep_file_data(exclude_collections)
        if excluded_collection_list:
            retval['excluded_collections'] = excluded_collection_list

    return retval


def has_content(candidate_file):
    """Beyond checking that the candidate exists, this also assures
    that the file has something other than whitespace,
    which can cause errors when given to pip.
    """
    if not os.path.exists(candidate_file):
        return False
    with open(candidate_file, 'r') as f:
        content = f.read()
    return bool(content.strip().strip('\n'))


def strip_comments(reqs: dict[str, list]) -> dict[str, list]:
    """
    Filter any comments out of the Python collection requirements input.

    :param dict reqs: A dict of Python requirements, keyed by collection name.

    :return: Same as the input parameter, except with no comment lines.
    """
    result: dict[str, list] = {}
    for collection, lines in reqs.items():
        for line in lines:
            # strip comments
            if (base_line := COMMENT_RE.sub('', line.strip())):
                result.setdefault(collection, []).append(base_line)

    return result


def should_be_excluded(value: str, exclusion_list: list[str]) -> bool:
    """
    Test if `value` matches against any value in `exclusion_list`.

    The exclusion_list values are either strings to be compared in a case-insensitive
    manner against value, OR, they are regular expressions to be tested against the
    value. A regular expression will contain '~' as the first character.

    :return: True if the value should be excluded, False otherwise.
    """
    for exclude_value in exclusion_list:
        if exclude_value[0] == "~":
            pattern = exclude_value[1:]
            if re.fullmatch(pattern.lower(), value.lower()):
                return True
        elif exclude_value.lower() == value.lower():
            return True
    return False


def filter_requirements(reqs: dict[str, list],
                        exclude: list[str] | None = None,
                        exclude_collections: list[str] | None = None,
                        is_python: bool = True) -> list[str]:
    """
    Given a dictionary of Python requirement lines keyed off collections,
    return a list of cleaned up (no source comments) requirements
    annotated with comments indicating the sources based off the collection keys.

    Currently, non-pep508 compliant Python entries are passed through. We also no
    longer attempt to normalize names (replace '_' with '-', etc), other than
    lowercasing it for exclusion matching, since we no longer are attempting
    to combine similar entries.

    :param dict reqs: A dict of either Python or system requirements, keyed by collection name.
    :param list exclude: A list of requirements to be excluded from the output.
    :param list exclude_collections: A list of collection names from which to exclude all requirements.
    :param bool is_python: This should be set to True for Python requirements, as each
        will be tested for PEP508 compliance. This should be set to False for system requirements.

    :return: A list of filtered and annotated requirements.
    """
    exclusions: list[str] = []
    collection_ignore_list: list[str] = []

    if exclude:
        exclusions = exclude.copy()
    if exclude_collections:
        collection_ignore_list = exclude_collections.copy()

    annotated_lines: list[str] = []
    uncommented_reqs = strip_comments(reqs)

    for collection, lines in uncommented_reqs.items():
        # Bypass this collection if we've been told to ignore all requirements from it.
        if should_be_excluded(collection, collection_ignore_list):
            logger.debug("# Excluding all requirements from collection '%s'", collection)
            continue

        for line in lines:
            # Determine the simple name based on type of requirement
            if is_python:
                try:
                    parsed_req = Requirement(line)
                    name = parsed_req.name
                except InvalidRequirement:
                    logger.warning(
                        "Passing through non-PEP508 compliant line '%s' from collection '%s'",
                        line, collection
                    )
                    annotated_lines.append(line)  # We intentionally won't annotate these lines (multi-line?)
                    continue
            else:
                # bindep system requirements have the package name as the first "word" on the line
                name = line.split(maxsplit=1)[0]

            if collection.lower() not in {'user', 'exclude'}:
                lower_name = name.lower()

                if lower_name in EXCLUDE_REQUIREMENTS:
                    logger.debug("# Excluding requirement '%s' from '%s'", name, collection)
                    continue

                if should_be_excluded(lower_name, exclusions):
                    logger.debug("# Explicitly excluding requirement '%s' from '%s'", name, collection)
                    continue

            annotated_lines.append(f'{line} # from collection {collection}')

    return annotated_lines


def parse_args(args=None):

    parser = argparse.ArgumentParser(
        prog='introspect',
        description=(
            'ansible-builder introspection; injected and used during execution environment build'
        )
    )

    subparsers = parser.add_subparsers(
        help='The command to invoke.',
        dest='action',
        required=True,
    )

    create_introspect_parser(subparsers)

    return parser.parse_args(args)


def run_introspect(args, log):
    data = process(args.folder,
                   user_pip=args.user_pip,
                   user_bindep=args.user_bindep,
                   exclude_pip=args.exclude_pip,
                   exclude_bindep=args.exclude_bindep,
                   exclude_collections=args.exclude_collections)
    log.info('# Dependency data for %s', args.folder)

    excluded_collections = data.pop('excluded_collections', None)

    data['python'] = filter_requirements(
        data['python'],
        exclude=data['python'].pop('exclude', []),
        exclude_collections=excluded_collections,
    )

    data['system'] = filter_requirements(
        data['system'],
        exclude=data['system'].pop('exclude', []),
        exclude_collections=excluded_collections,
        is_python=False
    )

    print('---')
    print(yaml.dump(data, default_flow_style=False))

    if args.write_pip and data.get('python'):
        write_file(args.write_pip, data.get('python') + [''])
    if args.write_bindep and data.get('system'):
        write_file(args.write_bindep, data.get('system') + [''])

    sys.exit(0)


def create_introspect_parser(parser):
    introspect_parser = parser.add_parser(
        'introspect',
        help='Introspects collections in folder.',
        description=(
            'Loops over collections in folder and returns data about dependencies. '
            'This is used internally and exposed here for verification. '
            'This is targeted toward collection authors and maintainers.'
        )
    )
    introspect_parser.add_argument('--sanitize', action='store_true',
                                   help=argparse.SUPPRESS)

    introspect_parser.add_argument(
        'folder', default=BASE_COLLECTIONS_PATH, nargs='?',
        help=(
            'Ansible collections path(s) to introspect. '
            'This should have a folder named ansible_collections inside of it.'
        )
    )

    introspect_parser.add_argument(
        '--user-pip', dest='user_pip',
        help='An additional file to combine with collection pip requirements.'
    )
    introspect_parser.add_argument(
        '--user-bindep', dest='user_bindep',
        help='An additional file to combine with collection bindep requirements.'
    )
    introspect_parser.add_argument(
        '--exclude-bindep-reqs', dest='exclude_bindep',
        help='An additional file to exclude specific bindep requirements from collections.'
    )
    introspect_parser.add_argument(
        '--exclude-pip-reqs', dest='exclude_pip',
        help='An additional file to exclude specific pip requirements from collections.'
    )
    introspect_parser.add_argument(
        '--exclude-collection-reqs', dest='exclude_collections',
        help='An additional file to exclude all requirements from the listed collections.'
    )
    introspect_parser.add_argument(
        '--write-pip', dest='write_pip',
        help='Write the combined pip requirements file to this location.'
    )
    introspect_parser.add_argument(
        '--write-bindep', dest='write_bindep',
        help='Write the combined bindep requirements file to this location.'
    )

    return introspect_parser


def write_file(filename: str, lines: list) -> bool:
    parent_dir = os.path.dirname(filename)
    if parent_dir and not os.path.exists(parent_dir):
        logger.warning('Creating parent directory for %s', filename)
        os.makedirs(parent_dir)
    new_text = '\n'.join(lines)
    if os.path.exists(filename):
        with open(filename, 'r') as f:
            if f.read() == new_text:
                logger.debug("File %s is already up-to-date.", filename)
                return False
        logger.warning('File %s had modifications and will be rewritten', filename)
    with open(filename, 'w') as f:
        f.write(new_text)
    return True


def main():
    args = parse_args()

    if args.action == 'introspect':
        run_introspect(args, logger)

    logger.error("An error has occurred.")
    sys.exit(1)


if __name__ == '__main__':
    main()
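The `should_be_excluded()` helper above drives both the per-requirement and per-collection exclusion lists; a standalone copy to illustrate the `~`-prefixed regex convention (the package names below are illustrative inputs only):

```python
import re

def should_be_excluded(value, exclusion_list):
    # Mirrors the matching rules documented above: a leading '~' marks a
    # regular expression (full-matched, case-insensitively, against the
    # value); any other entry is a case-insensitive string comparison.
    for exclude_value in exclusion_list:
        if exclude_value[0] == "~":
            if re.fullmatch(exclude_value[1:].lower(), value.lower()):
                return True
        elif exclude_value.lower() == value.lower():
            return True
    return False
```

So `['pyyaml']` excludes `PyYAML` regardless of case, while `['~req.*']` excludes anything whose whole name matches the pattern, such as `requests`.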
@ -1,56 +0,0 @@
#!/bin/bash
# Copyright (c) 2024 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#####################################################################
# Script to encapsulate pip installation.
#
# Usage: pip_install <PYCMD>
#
# Options:
#   PYCMD - The path to the python executable to use.
#####################################################################

set -x

PYCMD=$1

if [ -z "$PYCMD" ]
then
    echo "Usage: pip_install <PYCMD>"
    exit 1
fi

if [ ! -x "$PYCMD" ]
then
    echo "$PYCMD is not an executable"
    exit 1
fi

# This is going to be our default functionality for now. This will likely
# need to change if we add support for non-RHEL distros.
$PYCMD -m ensurepip --root /

if [ $? -ne 0 ]
then
    cat<<EOF
**********************************************************************
ERROR - pip installation failed for Python $PYCMD
**********************************************************************
EOF
    exit 1
fi

exit 0
@ -1,33 +0,0 @@
version: 3
images:
  base_image:
    name: quay.io/centos/centos:stream9
dependencies:
  python: requirements.txt
  galaxy: collections/requirements.yml
  python_interpreter:
    package_system: python3.12
    python_path: /usr/bin/python3.12
  ansible_core:
    package_pip: ansible-core>=2.17.0
  ansible_runner:
    package_pip: ansible-runner==2.4.0
  system: |
    git-core [platform:rpm]
    python3.11-devel [platform:rpm compile]
    libcurl-devel [platform:rpm compile]
    krb5-devel [platform:rpm compile]
    krb5-workstation [platform:rpm]
    subversion [platform:rpm]
    subversion [platform:dpkg]
    git-lfs [platform:rpm]
    sshpass [platform:rpm]
    rsync [platform:rpm]
    epel-release [platform:rpm]
    unzip [platform:rpm]
    podman-remote [platform:rpm]
    cmake [platform:rpm compile]
    gcc [platform:rpm compile]
    gcc-c++ [platform:rpm compile]
    make [platform:rpm compile]
    openssl-devel [platform:rpm compile]
@ -0,0 +1,31 @@
[all:vars]
host_domain=core.dc.verdnatura.es

[backup:vars]
host_domain=backup.dc.verdnatura.es

[ceph]
ceph[1:3]

[ceph_gw]
ceph-gw[1:2]

[pve]
pve[01:05]

[infra:children]
ceph
ceph_gw
pve

[core]
core-agent
core-proxy

[backup]
bacula-dir
bacula-db
bacularis
backup-nas
tftp
kube-backup
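The bracketed patterns in the inventories (`ceph[1:3]`, `pve[01:05]`) are Ansible host ranges, expanded per host at parse time. A rough sketch of the numeric expansion, including the zero padding, assuming a simple `name[start:end]` form (Ansible also supports alphabetic ranges and steps, which this sketch ignores):

```python
import re

def expand_range(pattern: str) -> list[str]:
    # Expand an Ansible-style numeric host range, e.g. "pve[01:05]"
    # -> pve01..pve05. Zero padding is kept when the start bound has
    # leading zeros; non-range patterns are returned unchanged.
    m = re.fullmatch(r'(.*)\[(\d+):(\d+)\](.*)', pattern)
    if not m:
        return [pattern]
    prefix, start, end, suffix = m.groups()
    width = len(start) if start.startswith('0') else 0
    return [f"{prefix}{i:0{width}d}{suffix}"
            for i in range(int(start), int(end) + 1)]
```

For example, `expand_range("ceph[1:3]")` yields `ceph1`, `ceph2`, `ceph3`, while `expand_range("pve[01:05]")` keeps the two-digit padding.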
@ -1,23 +1,22 @@
hostname_fqdn: "{{inventory_hostname_short}}.{{host_domain}}"
ansible_host: "{{hostname_fqdn}}"
passbolt: 'anatomicjc.passbolt.passbolt'
passbolt_inventory: 'anatomicjc.passbolt.passbolt_inventory'
sysadmin_mail: sysadmin@domain.local
ansible_host: "{{inventory_hostname_short}}.{{host_domain}}"
sysadmin_mail: sysadmin@verdnatura.es
sysadmin_group: sysadmin
smtp_server: smtp.domain.local
homes_server: homes.domain.local
nagios_server: nagios.domain.local
time_server: time1.domain.local time2.domain.local
main_dns_server: ns1.domain.local
ldap_uri: ldap://ldap.domain.local
ldap_base: dc=domain,dc=local
smtp_server: smtp.verdnatura.es
homes_server: homes.servers.dc.verdnatura.es
nagios_server: nagios.verdnatura.es
time_server: time1.verdnatura.es time2.verdnatura.es
main_dns_server: ns1.verdnatura.es
ldap_uri: ldap://ldap.verdnatura.es
ldap_base: dc=verdnatura,dc=es
dc_net: "10.0.0.0/16"
resolv:
  domain: verdnatura.es
  search: verdnatura.es
  resolvers:
    - '8.8.8.8'
    - '8.8.4.4'
awx_email: awx@domain.local
    - '10.0.0.4'
    - '10.0.0.5'
awx_email: awx@verdnatura.es
awx_pub_key: >
  ssh-ed25519
  AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
  awx@domain.local
passbolt_folder: 00000000-0000-0000-0000-000000000000
  AAAAC3NzaC1lZDI1NTE5AAAAIKzAwWm+IsqZCgMzjdZ7Do3xWtVtoUCpWJpH7KSi2a/H
  awx@verdnatura.es
@ -0,0 +1,38 @@
[all:vars]
host_domain=lab.verdnatura.es

[cephlab]
cephlab[01:03]

[pvelab]
pvelab[01:03]

[infra:children]
cephlab
pvelab

[cephtest]
cephtest[01:03]

[kubepre]
kubepre-helm
kubepre-proxy1
kubepre-master[1:3]
kubepre-worker[1:4]

[kubetest]
kubetest-helm
kubetest-master[01:03]
kubetest-worker[01:04]

[laboratory]
corelab-proxy1
zammad
matrix
influxdb2

[guest:children]
cephtest
kubepre
kubetest
laboratory
@ -0,0 +1,81 @@
[all:vars]
host_domain=servers.dc.verdnatura.es

[kube_master]
kube-master[1:5]

[kube_worker]
kube-worker[1:5]

[kube_proxy]
kube-proxy[1:2]

[kube_helper]
kube-helm

[kubernetes:children]
kube_master
kube_worker
kube_proxy
kube_helper

[ad]
dc[1:2]
server

[db]
db-proxy[1:2]
db[1:2]

[ldap]
ldap-proxy[1:2]
ldap[1:3]

[mail]
dovecot
mailgw[1:2]
postfix
spamd
spamd-db

[monitoring]
cacti
logger
nagios
nagiosql-db
librenms

[network]
dhcp[1:2]
ns[1:2]
unifi
vpn
time[1:2]

[princ]
pbx
homes
doku
iventoy

[rds]
ts-proxy[1:2]
profiles

[test]
test-db1
test-db-proxy[1:2]
monthly-db
dev-db

[guest:children]
ad
db
kubernetes
ldap
mail
monitoring
network
princ
rds
test
@ -1,20 +0,0 @@
[all:vars]
host_domain=domain.local

[pve:vars]
host_domain=core.domain.local

[ceph]
ceph[1:3]

[pve]
pve[1:5]

[infra:children]
ceph
pve

[servers]
server1 ansible_host=10.0.0.1
server2 ansible_host=10.0.0.2
server3 ansible_host=10.0.0.3
@ -1,11 +1,8 @@
- name: Configure base Debian host
  hosts: all
  vars_files: ../vault.yml
  tasks:
    - name: Configure virtual machine or host (not LXC)
      import_role:
        name: debian-host
      when: ansible_virtualization_role == 'host' or ansible_virtualization_type == 'kvm'
    - name: Configure base system (all)
    - name: Configure base system
      import_role:
        name: debian-base
    - name: Configure guest

@ -15,4 +12,4 @@

    - name: Configure virtual machine
      import_role:
        name: debian-qemu
      when: ansible_virtualization_type == 'kvm'
      when: ansible_virtualization_role == 'guest' and ansible_virtualization_type == 'kvm'
@ -1,12 +1,10 @@
- name: Fetch or create passbolt password
- name: Fetch passbolt password
  hosts: all
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ lookup(passbolt, 'test', password=passbolt_password) }}"
      vars:
        passbolt_password: 'S3cR3tP4$$w0rd'
      environment:
        PASSBOLT_CREATE_NEW_RESOURCE: true
        PASSBOLT_NEW_RESOURCE_PASSWORD_LENGTH: 18
        PASSBOLT_NEW_RESOURCE_PASSWORD_SPECIAL_CHARS: false
    - name: Print password
      debug:
        msg: "Variable: {{ lookup(passbolt, 'test') }}"
      vars:
        passbolt: 'anatomicjc.passbolt.passbolt'
        passbolt_inventory: 'anatomicjc.passbolt.passbolt_inventory'
@ -1,3 +1,3 @@
py-passbolt==0.0.18
cryptography==3.3.2
passlib==1.7.4
ansible==2.1.0
@ -1,32 +1,7 @@
vn_first_time: false
vn_witness_checked: false
grub_user: admin
default_user: user
root_password: Pa$$w0rd
fail2ban:
  email: "{{ sysadmin_mail }}"
  bantime: 600
  maxretry: 4
  ignore: "127.0.0.0/8 {{ dc_net }}"
  logpath: "/var/log/auth.log"
fail2ban_base_packages:
  - fail2ban
  - rsyslog
time_server_spain: ntp.roa.es
nagios_packages:
  - nagios-nrpe-server
  - nagios-plugins-contrib
  - monitoring-plugins-basic
base_packages:
  - htop
  - psmisc
  - bash-completion
  - screen
  - aptitude
  - tree
  - btop
  - ncdu
  - debconf-utils
  - net-tools
locales_present:
  - en_US.UTF-8
  - es_ES.UTF-8
@ -0,0 +1,8 @@
#!/bin/bash

echo 'tzdata tzdata/Areas select Europe' | debconf-set-selections
echo 'tzdata tzdata/Zones/Europe select Madrid' | debconf-set-selections
echo 'tzdata tzdata/Zones/Etc select UTC' | debconf-set-selections
rm /etc/timezone
rm /etc/localtime
dpkg-reconfigure -f noninteractive tzdata
@@ -1,26 +1,21 @@
- name: restart systemd-timesyncd
  systemd:
- name: restart-timesyncd
  service:
    name: systemd-timesyncd
    state: restarted
- name: restart-exim
  service:
    name: exim4
    state: restarted
- name: restart-ssh
  systemd:
  service:
    name: ssh
    state: restarted
- name: restart fail2ban
  systemd:
- name: restart-fail2ban
  service:
    name: fail2ban
    state: restarted
- name: restart-nrpe
  systemd:
  service:
    name: nagios-nrpe-server
    state: restarted
- name: restart sshd
  systemd:
    name: sshd
    state: restarted
- name: generate locales
  command: /usr/sbin/locale-gen
- name: reconfigure tzdata
  command: dpkg-reconfigure -f noninteractive tzdata
- name: update exim configuration
  command: /usr/sbin/update-exim4.conf
@@ -2,52 +2,19 @@
  apt:
    name: bacula-fd
    state: present
- name: Read content file in base64
- name: Load Bacula default passwords
  slurp:
    src: /etc/bacula/common_default_passwords
  register: file_content
- name: Going to text plane
  no_log: true
  set_fact:
    file_content_decoded: "{{ file_content.content | b64decode }}"
- name: Extracting passwords
  no_log: true
  set_fact:
    passwords: "{{ file_content_decoded.splitlines() | select('match', '^[^#]') | map('regex_replace', '^([^=]+)=(.+)$', '\\1:\\2') | list }}"
- name: Initialize password dictionary
  set_fact:
    bacula_passwords: {}
- name: Convert lines to individual variables generating a new dict
  no_log: true
  set_fact:
    bacula_passwords: "{{ bacula_passwords | combine({item.split(':')[0].lower(): item.split(':')[1] | regex_replace('\\n$', '') }) }}"
  loop: "{{ passwords }}"
  when: "'FDPASSWD' in item or 'FDMPASSWD' in item"
  register: bacula_passwords
- name: Configure Bacula FD
  template:
    src: bacula-fd.conf
    dest: /etc/bacula/bacula-fd.conf
    owner: root
    group: bacula
    mode: u=rw,g=r,o=
    mode: '0640'
    backup: true
  register: bacula_config
- name: Configure master cert
  copy:
    content: "{{ master_cert_content }}"
    dest: /etc/bacula/master-cert.pem
    owner: root
    group: root
    mode: u=rw,g=r,o=r
- name: Configure master cert
  copy:
    content: "{{ lookup(passbolt, 'fd-cert.pem', folder_parent_id=passbolt_folder).description }}"
    dest: /etc/bacula/fd-cert.pem
    owner: root
    group: bacula
    mode: u=rw,g=r,o=
- name: Restart Bacula FD service
  service:
    name: bacula-fd
    state: restarted
  when: bacula_config.changed
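The removed filter chain above (slurp, b64decode, splitlines, regex_replace, combine) boils down to: read `KEY=VALUE` lines, skip comments, and lowercase the keys. A minimal POSIX-shell sketch of the same parsing, run against a throwaway sample file (the real source is /etc/bacula/common_default_passwords, and the play additionally keeps only the FDPASSWD/FDMPASSWD keys):

```shell
#!/bin/sh
# Sketch of the password extraction done by the set_fact filter chain above.
# /tmp/common_default_passwords.sample is a made-up stand-in for the real file.
parse_bacula_passwords() {
    while IFS='=' read -r key value; do
        case "$key" in
            ''|'#'*) continue ;;   # skip blank lines and comments
        esac
        # lowercase the key, like item.split(':')[0].lower() in the play
        printf '%s:%s\n' "$(printf '%s' "$key" | tr 'A-Z' 'a-z')" "$value"
    done < "$1"
}

cat > /tmp/common_default_passwords.sample <<'EOF'
# default Bacula passwords
FDPASSWD=secret1
FDMPASSWD=secret2
EOF

parse_bacula_passwords /tmp/common_default_passwords.sample
```

With the sample input this prints `fdpasswd:secret1` and `fdmpasswd:secret2`, which is the shape the bacula-fd.conf template consumes.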
@@ -1,5 +0,0 @@
- name: Delete default user
  user:
    name: "{{ default_user }}"
    state: absent
    remove: yes
@@ -1,32 +1,15 @@
- name: Install fail2ban and rsyslog packages
- name: Install fail2ban packages
  apt:
    name: "{{ fail2ban_base_packages }}"
    name: fail2ban
    state: present
- name: Configure sshd_config settings
  copy:
    dest: /etc/ssh/sshd_config.d/vn-fail2ban.conf
    content: |
      # Do not edit this file! Ansible will overwrite it.

      SyslogFacility AUTH
    owner: root
    group: root
    mode: u=rw,g=r,o=r
  notify: restart sshd
  loop:
    - fail2ban
    - rsyslog
- name: Configure fail2ban service
  template:
    src: jail.local
    dest: /etc/fail2ban/jail.local
    owner: root
    group: root
    mode: u=rw,g=r,o=r
  notify: restart fail2ban
  register: jail
- name: Ensure file for auth sshd custom log exists
  file:
    path: /var/log/auth.log
    state: touch
    owner: root
    group: adm
    mode: u=rw,g=r,o=
  when: jail.changed
    mode: '0644'
  notify: restart-fail2ban
@@ -1,49 +0,0 @@
# Enabled password protection to restrict GRUB editing only, leaving menu entries accessible without authentication.
# Added the --unrestricted option to the custom 09_make_OS_entries_unrestricted template.
# Official GRUB Manual: https://www.gnu.org/software/grub/manual/grub/html_node/Authentication-and-authorisation.html
# Additional guidance: http://daniel-lange.com/archives/75-Securing-the-grub-boot-loader.html
# Discussion and troubleshooting: https://wiki.archlinux.org/title/Talk:GRUB/Tips_and_tricks
# To generate a GRUB password, use the command syntax provided by grub-mkpasswd-pbkdf2 --help.
- name: GRUB edit unrestricted option
  copy:
    content: |
      #!/bin/sh
      exec tail -n +3 $0
      # This file provides an easy way to add custom menu entries. Simply type the
      # menu entries you want to add after this comment. Be careful not to change
      # the 'exec tail' line above.
      menuentry_id_option="--unrestricted $menuentry_id_option"
    dest: /etc/grub.d/09_make_OS_entries_unrestricted
    owner: root
    group: root
    checksum: fed5c365f11a919b857b78207565cf341b86082b
    mode: u=rwx,g=rx,o=rx
  register: grubunrestricted
- name: Search grub password in Passbolt
  no_log: true
  set_fact:
    grub_code: "{{ lookup(passbolt, 'grub', folder_parent_id=passbolt_folder).description }}"
- name: GRUB edit password protection
  copy:
    content: |
      #!/bin/sh
      exec tail -n +3 $0
      set superusers="{{ grub_user }}"
      password_pbkdf2 {{ grub_user }} {{ grub_code }}
    dest: /etc/grub.d/00_before
    owner: root
    group: root
    mode: u=rwx,g=rx,o=rx
  register: grubpass
- name: Change GRUB_TIMEOUT from 5 to 1
  copy:
    content: |
      GRUB_TIMEOUT=1
    dest: /etc/default/grub.d/timeout.cfg
    owner: root
    group: root
    mode: u=rw,g=r,o=r
  register: grubtime
- name: Generate GRUB configuration
  command: update-grub
  when: grubunrestricted.changed or grubpass.changed or grubtime.changed
@@ -1,4 +1,10 @@
- name: Install base packages
  apt:
    name: "{{ base_packages }}"
    name: "{{ item }}"
    state: present
  with_items:
    - htop
    - psmisc
    - bash-completion
    - screen
    - aptitude
@@ -1,6 +1,15 @@
- name: make sure locales in variable are generated
  locale_gen:
    name: "{{ item }}"
- name: Enable locale languages
  lineinfile:
    dest: /etc/locale.gen
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    state: present
  with_items: "{{ locales_present }}"
  notify: generate locales
  with_items:
    - regexp: "^# es_ES.UTF-8 UTF-8"
      line: "es_ES.UTF-8 UTF-8"
    - regexp: "^# en_US.UTF-8 UTF-8"
      line: "en_US.UTF-8 UTF-8"
- name: Generate locale
  command: locale-gen
- name: Update locale
  command: update-locale LANG=en_US.UTF-8
@@ -1,15 +1,3 @@
- import_tasks: witness.yml
  tags: witness
- import_tasks: root.yml
  tags: root
- import_tasks: resolv.yml
  tags: resolv
- import_tasks: timesync.yml
  tags: timesync
- import_tasks: sshd_configure.yml
  tags: sshd_configure
- import_tasks: defuser.yml
  tags: defuser
- import_tasks: install.yml
  tags: install
- import_tasks: locale.yml
@@ -26,11 +14,3 @@
  tags: vim
- import_tasks: nrpe.yml
  tags: nrpe
- import_tasks: fail2ban.yml
  tags: fail2ban
- import_tasks: bacula.yml
  tags: bacula
- import_tasks: vn-repo.yml
  tags: vn-repo
- import_tasks: grub_startup.yml
  tags: grub_startup
@@ -2,6 +2,6 @@
  copy:
    src: motd
    dest: /etc/update-motd.d/90-vn
    mode: u=rwx,g=rx,o=rx
    mode: '755'
    owner: root
    group: root
@@ -1,8 +1,10 @@
- name: Install NRPE packages
  apt:
    name: "{{ nagios_packages }}"
    name: "{{ item }}"
    state: present
    install_recommends: no
  loop:
    - nagios-nrpe-server
    - nagios-plugins-contrib
- name: Set NRPE generic configuration
  template:
    src: nrpe.cfg
@@ -2,6 +2,6 @@
  copy:
    src: profile.sh
    dest: /etc/profile.d/vn.sh
    mode: u=rw,g=r,o=r
    mode: '644'
    owner: root
    group: root
@@ -3,27 +3,46 @@
    name: exim4
    state: present
- name: Prepare exim configuration
  blockinfile:
    path: /etc/exim4/update-exim4.conf.conf
    marker_begin: '--- BEGIN VN ---'
    marker_end: '--- END VN ---'
    marker: "# {mark}"
    block: |
      dc_eximconfig_configtype='satellite'
      dc_other_hostnames='{{ ansible_fqdn }}'
      dc_local_interfaces='127.0.0.1'
      dc_readhost='{{ ansible_fqdn }}'
      dc_smarthost='{{ smtp_server }}'
      dc_hide_mailname='true'
  lineinfile:
    dest: /etc/exim4/update-exim4.conf.conf
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    state: present
    create: yes
    mode: u=rw,g=r,o=r
  notify: update exim configuration
    mode: 0644
  with_items:
    - regexp: '^dc_eximconfig_configtype'
      line: "dc_eximconfig_configtype='satellite'"
    - regexp: '^dc_other_hostnames'
      line: "dc_other_hostnames='{{ ansible_fqdn }}'"
    - regexp: '^dc_local_interfaces'
      line: "dc_local_interfaces='127.0.0.1'"
    - regexp: '^dc_readhost'
      line: "dc_readhost='{{ ansible_fqdn }}'"
    - regexp: '^dc_relay_domains'
      line: "dc_relay_domains=''"
    - regexp: '^dc_minimaldns'
      line: "dc_minimaldns='false'"
    - regexp: '^dc_relay_nets'
      line: "dc_relay_nets=''"
    - regexp: '^dc_smarthost'
      line: "dc_smarthost='{{ smtp_server }}'"
    - regexp: '^CFILEMODE'
      line: "CFILEMODE='644'"
    - regexp: '^dc_use_split_config'
      line: "dc_use_split_config='false'"
    - regexp: '^dc_hide_mailname'
      line: "dc_hide_mailname='true'"
    - regexp: '^dc_mailname_in_oh'
      line: "dc_mailname_in_oh='true'"
    - regexp: '^dc_localdelivery'
      line: "dc_localdelivery='mail_spool'"
  notify: restart-exim
  register: exim_config
- name: Force execution of handlers immediately
  meta: flush_handlers
- name: Update exim configuration
  command: update-exim4.conf
  when: exim_config.changed
- name: Sending mail to verify relay host configuration works
  shell: >
    sleep 2; echo "If you see this message, relayhost on {{ ansible_fqdn }} has been configured correctly." \
    echo "If you see this message, relayhost on {{ ansible_fqdn }} has been configured correctly." \
    | mailx -s "Relayhost test for {{ ansible_fqdn }}" "{{ sysadmin_mail }}"
  when: exim_config.changed
@@ -1,22 +0,0 @@
- name: Check if DNS is already configured
  stat:
    path: /etc/resolv.conf
  register: resolv_conf
- name: Read /etc/resolv.conf
  slurp:
    path: /etc/resolv.conf
  register: resolv_conf_content
  when: resolv_conf.stat.exists
- name: Check if DNS servers are already present
  set_fact:
    dns_configured: "{{ resolv_conf_content['content'] | b64decode | regex_search('^nameserver') is not none }}"
  when: resolv_conf.stat.exists
- name: Apply resolv.conf template only if DNS is not configured
  template:
    src: templates/resolv.conf
    dest: /etc/resolv.conf
    owner: root
    group: root
    mode: u=rw,g=r,o=r
    backup: true
  when: not resolv_conf.stat.exists or not dns_configured
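Worth noting about the removed `dns_configured` check: `regex_search('^nameserver')` without multiline mode only matches when the very first character of the file starts a nameserver entry. A line-wise version of the same test in shell, run against a sample file rather than the real /etc/resolv.conf:

```shell
#!/bin/sh
# Line-wise version of the "is DNS already configured" test from resolv.yml.
# /tmp/resolv.conf.sample is a stand-in path, not the real /etc/resolv.conf.
cat > /tmp/resolv.conf.sample <<'EOF'
search example.local
nameserver 10.0.0.1
EOF

# grep '^nameserver' anchors per line, so a nameserver entry is found
# even when it is not the first line of the file
if grep -q '^nameserver' /tmp/resolv.conf.sample; then
    echo configured
else
    echo unconfigured
fi
```

With the sample content this prints `configured`, while the removed regex_search variant would have reported the same file as unconfigured.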
@@ -0,0 +1,9 @@
- name: Delete default user
  user:
    name: "{{ default_user }}"
    state: absent
    remove: yes
- name: Change root password
  user:
    name: root
    password: "{{ root_password | password_hash('sha512') }}"
@@ -1,43 +0,0 @@
- name: Set the root password changed witness variable
  set_fact:
    root_pass_changed: "{{ vn_ini.witness.root_pass_changed | default(false) }}"
- when: vn_witness_checked and not root_pass_changed
  no_log: true
  block:
    - name: Search root password in Passbolt
      ignore_errors: true
      set_fact:
        passbolt_password: >
          {{
            lookup(passbolt, inventory_hostname_short,
              username='root',
              uri='ssh://'+hostname_fqdn
            )
          }}
    - when: passbolt_password is not defined
      block:
        - name: Generate a random root password
          set_fact:
            root_password: "{{ lookup('password', '/dev/null length=18 chars=ascii_letters,digits') }}"
        - name: Save root password into Passbolt
          set_fact:
            msg: >
              {{
                lookup(passbolt, inventory_hostname_short,
                  username='root',
                  password=root_password,
                  uri='ssh://'+hostname_fqdn
                )
              }}
          environment:
            PASSBOLT_CREATE_NEW_RESOURCE: true
    - name: Change root password
      user:
        name: root
        password: "{{ root_password | password_hash('sha512') }}"
    - name: Set root password generated witness
      ini_file:
        path: /etc/vn.ini
        section: witness
        option: root_pass_changed
        value: true
@@ -1,17 +0,0 @@
- name: Configure sshd_config settings
  copy:
    dest: /etc/ssh/sshd_config.d/vn-listenipv4.conf
    content: |
      # Do not edit this file! Ansible will overwrite it.
      ListenAddress 0.0.0.0
    owner: root
    group: root
    mode: u=rw,g=r,o=r
  notify: restart sshd
- name: Deploy custom authorized_keys for root
  copy:
    dest: /root/.ssh/authorized_keys2
    content: "{{ public_keys }}"
    owner: root
    group: root
    mode: u=rw,g=,o=
@@ -1,23 +1,21 @@
- name: Ensure directory for timesyncd custom configuration exists
  file:
    path: /etc/systemd/timesyncd.conf.d/
    state: directory
- name: Configure /etc/systemd/timesyncd.conf
  lineinfile:
    path: /etc/systemd/timesyncd.conf
    regexp: '^#NTP'
    line: "NTP={{ time_server }}"
    owner: root
    group: root
    mode: u=rwx,g=rx,o=rx
- name: Configure NTP settings in /etc/systemd/timesyncd.conf.d/vn-ntp.conf
  copy:
    dest: /etc/systemd/timesyncd.conf.d/vn-ntp.conf
    content: |
      [Time]
      NTP={{ time_server }}
      FallbackNTP={{ time_server_spain }}
    mode: '0644'
- name: Configure /etc/systemd/timesyncd.conf
  lineinfile:
    path: /etc/systemd/timesyncd.conf
    regexp: '^#?FallbackNTP='
    line: "FallbackNTP=ntp.roa.es"
    owner: root
    group: root
    mode: u=rw,g=r,o=r
    mode: '0644'
  notify: restart systemd-timesyncd
- name: Ensure systemd-timesyncd service is enabled and started
- name: Service should start on boot
  service:
    name: systemd-timesyncd
    enabled: yes
    state: started
@@ -1,11 +1,2 @@
- name: Configure debconf for tzdata
  debconf:
    name: tzdata
    question: "{{ item.question }}"
    value: "{{ item.value }}"
    vtype: "string"
  loop:
    - { question: "tzdata/Areas", value: "Europe" }
    - { question: "tzdata/Zones/Europe", value: "Madrid" }
    - { question: "tzdata/Zones/Etc", value: "UTC" }
  notify: reconfigure tzdata
- name: Configure the time zone
  script: set-timezone.sh
@@ -6,6 +6,6 @@
  copy:
    src: vimrc.local
    dest: /etc/vim/
    mode: u=rw,g=r,o=r
    mode: '644'
    owner: root
    group: root
@@ -1,3 +1,12 @@
- name: Download vn-host Debian package
  get_url:
    url: "{{ vn_host.url }}/{{ vn_host.package }}"
    dest: "/tmp/{{ vn_host.package }}"
    mode: '0644'
- name: Install package
  apt:
    deb: "{{ vn_host_url }}"
    deb: "/tmp/{{ vn_host.package }}"
- name: Delete package
  file:
    path: "/tmp/{{ vn_host.package }}"
    state: absent
@@ -1,17 +0,0 @@
- name: Check if witness INI file exists
  stat:
    path: /etc/vn.ini
  register: witness_file
- name: Set witness related variables
  set_fact:
    vn_first_time: "{{ not witness_file.stat.exists }}"
    vn_witness_checked: true
- when: not vn_first_time
  block:
    - name: Slurp witness INI file
      slurp:
        src: /etc/vn.ini
      register: vn_ini_file
    - name: Put witness as dictionary into variable
      set_fact:
        vn_ini: "{{ vn_ini_file.content | b64decode | community.general.from_ini }}"
@@ -1,10 +1,10 @@
Director {
  Name = bacula-dir
  Password = "{{ bacula_passwords.fdpasswd }}"
  Password = "{{ FDPASSWD }}"
}
Director {
  Name = bacula-mon
  Password = "{{ bacula_passwords.fdmpasswd }}"
  Password = "{{ FDMPASSWD }}"
  Monitor = yes
}
FileDaemon {
@@ -14,9 +14,7 @@ action = %(action_)s
#+++++++++++++++ Jails

[sshd]
ignoreip = 127.0.0.1/8
enabled = true
port = 0:65535
filter = sshd
logpath = {{ fail2ban.logpath }}
action = %(action_mwl)s
logpath = %(sshd_log)s
@@ -1,5 +1,4 @@
allowed_hosts={{ nagios_server }}
server_address={{ ansible_default_ipv4.address }}

command[check_disk_root]=/usr/lib/nagios/plugins/check_disk -w 10% -c 5% -p /
command[check_disk_var]=/usr/lib/nagios/plugins/check_disk -w 10% -c 5% -p /var
@@ -0,0 +1,3 @@
vn_host:
  url: http://apt.verdnatura.es/pool/main/v/vn-host
  package: vn-host_2.0.2_all.deb
@@ -2,7 +2,5 @@
  service:
    name: nslcd
    state: restarted
- name: restart-ssh
  systemd:
    name: ssh
    state: restarted
- name: pam-update-ldap
  shell: pam-auth-update --enable ldap
@@ -11,7 +11,7 @@
    mode: '0640'
  notify:
    - restart-nslcd
  register: nslcd
    - pam-update-ldap
- name: Configure nsswitch to use NSLCD
  lineinfile:
    dest: /etc/nsswitch.conf
@@ -2,5 +2,3 @@
  tags: auth
- import_tasks: sudoers.yml
  tags: sudoers
- import_tasks: ssh_keys.yml
  tags: ssh_keys
@@ -1,21 +0,0 @@
- name: Set the SSH keys generated witness variable
  set_fact:
    ssh_keys_generated: "{{ vn_ini.witness.ssh_keys_generated | default(false) }}"
- when: vn_witness_checked and not ssh_keys_generated
  block:
    - name: Generate SSH key pairs
      openssh_keypair:
        path: "/etc/ssh/ssh_host_{{ item.type }}_key"
        type: "{{ item.type }}"
        force: yes
      loop:
        - { type: 'rsa' }
        - { type: 'ecdsa' }
        - { type: 'ed25519' }
      notify: restart sshd
    - name: Set SSH keys generated witness
      ini_file:
        path: /etc/vn.ini
        section: witness
        option: ssh_keys_generated
        value: true
@@ -8,7 +8,7 @@ idle_timelimit 60

base {{ ldap_base }}
binddn cn=nss,ou=admins,{{ ldap_base }}
bindpw {{ lookup(passbolt, 'nslcd', folder_parent_id=passbolt_folder).password }}
bindpw {{ nslcd_password }}
pagesize 500

filter group (&(objectClass=posixGroup)(cn={{ sysadmin_group }}))
@@ -1,4 +0,0 @@
vm.swappiness=10
vm.dirty_ratio=30
vm.dirty_background_ratio=5
net.core.somaxconn=65536
@@ -1,7 +0,0 @@
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.core.netdev_max_backlog=250000
net.ipv4.tcp_rmem=4096 87380 67108864
net.ipv4.tcp_wmem=4096 65536 67108864
net.ipv4.tcp_congestion_control=htcp
net.ipv4.tcp_mtu_probing=1
@@ -1,3 +0,0 @@
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
@@ -1,4 +0,0 @@
- name: restart-sysctl
  systemd:
    name: systemd-sysctl
    state: restarted
@@ -1,12 +0,0 @@
- name: Stop AppArmor
  systemd:
    name: apparmor
    state: stopped
- name: Disable AppArmor service
  systemd:
    name: apparmor
    enabled: no
- name: Mask AppArmor service
  systemd:
    name: apparmor
    masked: yes
@@ -1,9 +0,0 @@
- name: Set the hostname
  hostname:
    name: "{{ inventory_hostname_short }}"
    use: debian
- name: Populating hosts file with hostname
  lineinfile:
    path: /etc/hosts
    regexp: '^127\.0\.1\.1'
    line: '127.0.1.1 {{ hostname_fqdn }} {{ inventory_hostname_short }}'
@@ -1,6 +0,0 @@
- import_tasks: hostname.yml
  tags: hostname
- import_tasks: sysctl.yml
  tags: sysctl
- import_tasks: apparmor.yml
  tags: apparmor
@@ -1,8 +0,0 @@
- name: Set sysctl custom vn configuration
  copy:
    src: sysctl/
    dest: /etc/sysctl.d/
    owner: root
    group: root
    mode: u=rw,g=r,o=r
  notify: restart-sysctl
@@ -1,8 +1 @@
homes_path: /mnt/homes
autofs_packages:
  - nfs-common
  - autofs
  - libnfs-utils
  - autofs-ldap
blacklist_module_kernel: |
  blacklist snd_hda_intel
@@ -1,7 +1,12 @@
- name: Install autofs packages
  apt:
    name: "{{ autofs_packages }}"
    name: "{{ item }}"
    state: present
  with_items:
    - nfs-common
    - autofs
    - libnfs-utils
    - autofs-ldap
- name: Create homes directory
  file:
    path: "{{ homes_path }}"
@@ -28,6 +33,6 @@
    mode: '0644'
  notify: restart-autofs
- name: Service autofs service
  systemd:
  service:
    name: autofs
    enabled: yes
@@ -1,7 +0,0 @@
- name: Configure blacklist modprobe on VM
  copy:
    content: "{{ blacklist_module_kernel }}"
    dest: /etc/modprobe.d/vn-blacklist.conf
    owner: root
    group: root
    mode: u=rw,g=r,o=r
@@ -12,7 +12,7 @@
    mode: u=rw,g=r,o=r
    owner: root
    group: root
  register: grub
- name: Generate GRUB configuration
  command: update-grub
  when: grub.changed
- include_role:
    name: linux-autofs
@@ -4,5 +4,3 @@
  tags: hotplug
- import_tasks: autofs.yml
  tags: autofs
- import_tasks: blacklist.yml
  tags: blacklist
@@ -0,0 +1,23 @@
# https://docs.ansible.com/ansible/latest/collections/ansible/builtin/hostname_module.html#ansible-collections-ansible-builtin-hostname-module

- name: Set the hostname in /etc/hostname
  ansible.builtin.hostname:
    name: "{{ hostname }}"
    use: debian
- name: Replace /etc/hosts
  template:
    src: hosts.j2
    dest: /etc/hosts
    owner: root
    group: root
    mode: '0644'
    backup: true
- name: Replace /etc/resolv.conf
  template:
    src: resolv.j2
    dest: /etc/resolv.conf
    owner: root
    group: root
    mode: '0644'
    backup: true
  when: resolv_enabled
@@ -0,0 +1,5 @@
{% if hosts is defined %}
{% for host in hosts %}
{{ host.ip }} {{ hostname }}
{% endfor %}
{% endif %}
@@ -1,5 +1,5 @@
domain {{ host_domain }}
search {{ host_domain }}
domain {{ resolv.domain }}
search {{ resolv.search }}
{% if resolvers is defined %}
{% for resolver in resolvers %}
nameserver {{ resolver }}
@@ -0,0 +1,2 @@
- name: grub-register
  command: update-grub
@@ -0,0 +1,7 @@
- name: GRUB boot password protection
  blockinfile:
    path: /etc/grub.d/40_custom
    block: |
      set superusers="{{ grub_user }}"
      password_pbkdf2 {{ grub_user }} {{ grub_code }}
  notify: grub-register
@@ -0,0 +1 @@
grub_user: admin
@@ -1,127 +0,0 @@
#!/usr/bin/env perl
#===============================================================================
#  DESCRIPTION: Icinga2 / Nagios Check for chrony time sync status and offset
#
#      OPTIONS: -h : Help
#               -w [warning threshold in seconds]
#               -c [critical threshold in seconds]
#
# REQUIREMENTS: Chrony, perl version 5.10.1+
#
#       AUTHOR: Dennis Ullrich (request@decstasy.de)
#
#     BUGS ETC: https://github.com/Decstasy/check_chrony
#
#      LICENSE: GPL v3 (GNU General Public License, Version 3)
#               see https://www.gnu.org/licenses/gpl-3.0.txt
#===============================================================================

use 5.10.1;
use strict;
use warnings;
use utf8;
use Getopt::Std;

#
# Variables
#
my $chronyDaemonName = "chronyd";
my $leapOk = "Normal";

my $rc = 3;
my $msg = "";
my $perfdata = "";

#
# Subroutines
#

sub help {
  print "check_chrony [options]
  -w [warning threshold in seconds]
  -c [critical threshold in seconds]
  e.g.: check_chrony -w 0.6 -c 2\n";
  exit(3);
}

# Script exit with Nagios / Icinga typical output
sub _exit {
  my ( $return, $line ) = @_;
  my @state = ( "OK", "WARNING", "CRITICAL", "UNKNOWN" );
  print "$state[$return]: $line\n";
  exit( $return );
}

# Checks if a process with $_[0] as name exists
sub proc_exists {
  my $PID = `ps -C $_[0] -o pid=`;
  if ( ${^CHILD_ERROR_NATIVE} == 0 ){
    return 1;
  }
  return 0;
}

#
# Options
#

my %options=();
getopts( "hw:c:", \%options );

# Check input
if ( keys %options == 0 || defined $options{h} ){
  &help;
}

for my $key ( keys %options ){
  if ( $options{$key} !~ /^[\d\.]+$/ ){
    &_exit( 3, "Value of option -$key is not a valid number!" );
  }
}

#
# Check chrony process
#

&_exit( 2, "$chronyDaemonName is not running!" ) if not &proc_exists( $chronyDaemonName );

#
# Get tracking data
#

my $chronyOutput = `chronyc tracking`;
&_exit( 3, "Chronyc tracking command failed!" ) if ${^CHILD_ERROR_NATIVE} != 0;

my ( $offset, $dir ) = $chronyOutput =~ /(?:System\stime)[^\d]+([\d\.]+)(?:.*?)(fast|slow)/;
my ( $leap ) = $chronyOutput =~ /(?:Leap)[^\:]+(?::\s+)([\w\h]+)/;

#
# Check stuff
#

# Check offset
if ( $offset >= $options{"c"} ){
  $rc = 2; # Critical
}
elsif ( $offset >= $options{"w"} ){
  $rc = 1; # Warning
}
else {
  $rc = 0; # Ok
}

# Prepare offset performance data
$offset = $dir =~ "slow" ? "-$offset" : "+$offset";
$msg = sprintf( "Time offset of %+.9f seconds to reference.", $offset);
$perfdata = sprintf( "|offset=%.9fs;%.9f;%.9f", ${offset}, $options{'w'}, $options{'c'});

# Check leap
if( $leap !~ $leapOk ){
  &_exit( 2, "Chrony leap status \"$leap\" is not equal to \"$leapOk\"! $msg $perfdata" );
}

#
# Return stuff
#

&_exit($rc, "$msg $perfdata");
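The two regexes in the removed check_chrony script pull the offset, the fast/slow direction and the leap status out of `chronyc tracking`. The same offset extraction can be sketched in shell against a hard-coded sample line (the line below is illustrative, not captured from a real host):

```shell
#!/bin/sh
# Sketch of the offset extraction check_chrony applies to `chronyc tracking`
# output. The sample line is an assumption about the output format.
line='System time     : 0.000123456 seconds fast of NTP time'

# capture the first numeric field after "System time"
offset=$(printf '%s\n' "$line" | sed -n 's/^System time[^0-9]*\([0-9.]*\).*/\1/p')

# sign the offset like the Perl does: slow means behind the reference
case "$line" in
  *slow*) offset="-$offset" ;;
  *)      offset="+$offset" ;;
esac

echo "offset=${offset}s"
```

Run on the sample line this prints `offset=+0.000123456s`, matching the signed perfdata value the plugin emits.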
@@ -1,22 +0,0 @@
#!/bin/bash
# Checks status of disks SMART

STATUS_LABEL="SMART Health Status:"
STATUS_OK="$STATUS_LABEL OK"

if [[ "$#" == "0" ]]; then
  echo "Usage: $0 <disk1> [<disk2> ... <diskX>]"
  exit
fi

for DISK in "$@"
do
  STATUS=$(sudo /usr/sbin/smartctl -H -d scsi "$DISK" | grep "$STATUS_LABEL")

  if [ "$STATUS" != "$STATUS_OK" ]; then
    echo "CRITICAL: $DISK: $STATUS"
    exit 2
  fi
done

echo "OK: $STATUS_OK"
@@ -1,120 +0,0 @@
#!/usr/bin/perl

use strict;
use warnings;
use English;

$ENV{'PATH'} = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin";

use constant N_OK => 0;
use constant N_WARNING => 1;
use constant N_CRITICAL => 2;
use constant N_MSG => [ "OK", "WARNING", "CRITICAL" ];

my @zpool = ();

sub get_pools() {
  local *P;
  my $zpool_cmd = $EUID == 0 ? "zpool" : "sudo zpool";
  open(P, $zpool_cmd . " list -H 2>&1 |") or &nagios_response("Could not find zpool command", N_CRITICAL);
  while (<P>) {
    chomp;
    my @ret = split(/\s+/, $_);
    push(@zpool, {
      'name' => $ret[0],
      'health' => $ret[-2],
      'size' => $ret[1],
      'alloc' => $ret[2],
      'free' => $ret[3]
    });
  }
  close(P);
  my $rc = $?;
  if ($rc != 0) {
    &nagios_response("zpool list command failed (rc=$rc)", N_CRITICAL);
  }
}

sub get_status()
{
  my $storage = shift || "unknown";
  my $cat = 0;
  my $res = {};
  local *P;
  my $zpool_cmd = $EUID == 0 ? "zpool" : "sudo zpool";
  open(P, $zpool_cmd . " status $storage 2>&1 |") or &nagios_response("Could not find zpool command", N_CRITICAL);
  while (<P>) {
    chomp;
    if ($_ =~ /^\s*([^\s]+):\s*(.*)$/) {
      $cat = $1;
      $res->{"$cat"} = ();
      if ($2) {
        push(@{$res->{"$cat"}}, $2);
      }
    } elsif ($cat && $_ =~ /^\s+(.+)$/) {
      push(@{$res->{"$cat"}}, $1);
    }
  }
  close(P);
  my $rc = $?;
  if ($rc != 0) {
    &nagios_response("zpool status command failed (rc=$rc)", N_CRITICAL);
  }
  return $res;
}

sub nagios_response()
{
  my $msg = shift || "Unknown";
  my $exit_status = shift;
  if (!defined($exit_status)) {
    $exit_status = N_CRITICAL;
  }
  printf("%s %s\n", N_MSG->[$exit_status], $msg);
  exit($exit_status);
}

sub main() {

  &get_pools();
  my $exit_status = N_OK;
  my @out = ();
  foreach my $pool (@zpool) {
    if ($pool->{'health'} eq 'DEGRADED') {
      $exit_status = N_WARNING;
      my $extinfo = &get_status($pool->{'name'});
      my $scanned = 0;
      my $total = 0;
      my $speed = 0;
      my $left = 0;
      my $percent = 0;
      my $resilvered = 0;
      if (defined($extinfo->{'scan'})) {
        foreach my $line (@{$extinfo->{'scan'}}) {
          if ($line =~ /^\s*([^\s]+)\s+scanned out of\s+([^\s]+)\s+at\s+([^\s]+),\s*([^\s]+)\s+to go/) {
            $scanned = $1;
            $total = $2;
            $speed = $3;
            $left = $4;
          } elsif ($line =~ /^\s*([^\s]+)\s+resilvered,\s*([^\s]+)\s+done/) {
            $resilvered = $1;
            $percent = $2;
          }
        }
      }
      if ($scanned && length($scanned) > 2) {
        push(@out, sprintf("%s(RESILVER %s,%s,%s)", $pool->{'name'}, $percent, $speed, $left));
      } else {
        push(@out, sprintf("%s(%s %s/%s)", $pool->{'name'}, $pool->{'health'}, $pool->{'alloc'}, $pool->{'size'}));
      }
    } elsif ($pool->{'health'} ne 'ONLINE') {
      $exit_status = N_WARNING;
      push(@out, sprintf("%s(%s %s/%s)", $pool->{'name'}, $pool->{'health'}, $pool->{'alloc'}, $pool->{'size'}));
    } else {
      push(@out, sprintf("%s(%s %s/%s)", $pool->{'name'}, $pool->{'health'}, $pool->{'alloc'}, $pool->{'size'}));
    }
  }
  &nagios_response(join(",", @out), $exit_status);
}

&main();
@@ -2,7 +2,3 @@
  service:
    name: nagios-nrpe-server
    state: restarted
- name: restart-sysctl
  service:
    name: systemd-sysctl
    state: restarted
@@ -1,4 +1,22 @@
- import_tasks: nrpe.yml
  tags: nrpe
- import_tasks: vhost.yml
  tags: vhost
- name: Set NRPE PVE configuration
  copy:
    src: nrpe.cfg
    dest: /etc/nagios/nrpe.d/95-pve.cfg
    owner: root
    group: root
    mode: u=rw,g=r,o=r
  notify: restart-nrpe
- name: Add nagios to sudoers
  copy:
    src: sudoers
    dest: /etc/sudoers.d/nagios
    mode: u=rw,g=r,o=
    owner: root
    group: root
- name: Configure memory regions
  copy:
    src: vhost.conf
    dest: /etc/modprobe.d/
    mode: u=rw,g=r,o=r
    owner: root
    group: root
@@ -1,24 +0,0 @@
- name: Set NRPE PVE configuration
  copy:
    src: nrpe.cfg
    dest: /etc/nagios/nrpe.d/95-pve.cfg
    owner: root
    group: root
    mode: u=rw,g=r,o=r
  notify: restart-nrpe
- name: Copy PVE NRPE plugins
  copy:
    src: nrpe/
    dest: /etc/nagios/plugins/
    owner: root
    group: root
    mode: u=rwx,g=rx,o=rx
  notify: restart-nrpe
- name: Add nagios to sudoers
  copy:
    src: sudoers
    dest: /etc/sudoers.d/nagios
    mode: u=rw,g=r,o=
    owner: root
    group: root
  notify: restart-nrpe
@@ -1,8 +0,0 @@
- name: Configure memory regions
  copy:
    src: vhost.conf
    dest: /etc/modprobe.d/
    mode: u=rw,g=r,o=r
    owner: root
    group: root
  notify: restart-sysctl
@@ -1,13 +0,0 @@
#!/bin/bash

EXTRA_ARGS=()

if [ -f .passbolt.yml ]; then
	EXTRA_ARGS+=("--extra-vars" "@.passbolt.yml")
fi
if [ -f .vault-pass ]; then
	EXTRA_ARGS+=("--vault-password-file" ".vault-pass")
fi

#export PYTHONPATH=./venv/lib/python3.12/site-packages/
ansible-playbook ${EXTRA_ARGS[@]} $@
@@ -0,0 +1,2 @@
#!/bin/bash
ansible-playbook --vault-password-file .vaultpass $@
@@ -0,0 +1,26 @@
$ANSIBLE_VAULT;1.1;AES256
37396535616365346266643936343463336564303066356131363064633436353763343735666563
3234623639383039393735346632636163623435313965660a363363386637666261626661336333
39643436663965383239323435613339323766623630633430343465313038643235636666343938
3531636532613661650a336631666138306166346363333534613436396565343161623838363132
30643532636332356630306563336165663266663237326262336533363665653230393332623134
63626333303134346435666231386361643137636132383236373937636235326132666230306362
36363136653963366235626239656339663736393636663136656164393031323663623463393438
63646635343462363332636531323634623930643737333430613666366335303362323764363533
39336533366466633132383438633063616564623862366263376638323138623363656164343635
64346437646435383137313162656237303436343839366261633935613735316166376466616635
61616132626539656633353032663932653730633365633331313330323932653465656634383334
64633634326462316164316130373334666365643936646634333032326465373131656161646234
30376135613534303533326133383661353235343034356466333961396237373937353137373735
32373633396438313133663839373663656139346163386336373265356265613038646633386334
37353331373332373636346166333639343936633464663335653762386431376632613430363666
66636139663662633861643733306238646335353664636265623464393163343462326239613662
63633236326161643838353931646566323236326636376331663463333664636566666462303063
31303436356164623234346362386633633633623230366366393839376239636533636564666663
39663034373664663063656561306132383734646263656464626432633963396638363362396664
37303038373038346536613235333237613435663632656334643334326232396336653035326162
63663637306531373030643962386339393263653262363037626538386132353363663761363138
62663532313862396339653364306533326639333139336636343762373038333838313762393431
34386239303765653930306334393339383234303137346461633231353637326137353964613832
61353035353539633334333337346665383937346566396438306465336337366661323435616133
37643932306265633465643430636662653865313661663331316662303861356466