Conversation


@AlexMcDermott AlexMcDermott commented Sep 16, 2025

Hi, cool project

Contents

Podman hostname issue

I was running into the traceback below because the HOSTNAME environment variable isn't set under Podman, and found it more reliable to use the hostname from the socket library instead.

[alexmcdermott@grom:~/homelab/containers/stack-back]$ podman exec -it stack-back rcb backup
Traceback (most recent call last):
File "/restic-compose-backup/.venv/bin/rcb", line 10, in <module>
sys.exit(main())
^^^^^^
File "/restic-compose-backup/restic_compose_backup/cli.py", line 23, in main
containers = RunningContainers()
^^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/restic_compose_backup/containers.py", line 411, in __init__
if container_data.get("Id").startswith(os.environ["HOSTNAME"]):
~~~~~~~~~~^^^^^^^^^^^^
File "<frozen os>", line 714, in __getitem__
KeyError: 'HOSTNAME'
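
The fix described above can be sketched as follows (a minimal sketch of the container-ID prefix check; `detect_this_container` is an illustrative name, not the actual function in the PR):

```python
import os
import socket

def detect_this_container(container_data: dict) -> bool:
    """Check whether an inspected container record refers to the container
    we are running in. Podman does not always set the HOSTNAME environment
    variable, so fall back to socket.gethostname(), which returns the
    container's short ID inside both Docker and Podman containers."""
    hostname = os.environ.get("HOSTNAME") or socket.gethostname()
    return container_data.get("Id", "").startswith(hostname)
```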

Podman network issue

This was triggered when the internal backup container is created: it tries to use the main backup container's network, but no pod is specified and it is not started as part of the same compose file. My solution was to pass through the network name of the original container and connect to that. I also switched the DB connection to use the container's IP address.

[alexmcdermott@grom:~/homelab]$ podman exec -it stack-back rcb backup
2025-09-15 21:55:10,130 - INFO: Starting backup container
2025-09-15 21:55:11,391 - ERROR: 500 Server Error for http+docker://localhost/v1.41/containers/create: Internal Server Error ("container create: container dependency b68b86daed653b6a34f74873cf14ca0550bda075cb7c21d4a5354f82ee0b84a9 is part of a pod, but container is not: invalid argument")
Traceback (most recent call last):
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/api/client.py", line 275, in _raise_for_status
response.raise_for_status()
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/create

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/restic-compose-backup/restic_compose_backup/cli.py", line 167, in backup
result = backup_runner.run(
^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/restic_compose_backup/backup_runner.py", line 20, in run
container = client.containers.run(
^^^^^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/models/containers.py", line 876, in run
container = self.create(image=image, command=command,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/models/containers.py", line 935, in create
resp = self.client.api.create_container(**create_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/api/container.py", line 440, in create_container
return self.create_container_from_config(config, name, platform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/api/container.py", line 457, in create_container_from_config
return self._result(res, True)
^^^^^^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/api/client.py", line 281, in _result
self._raise_for_status(response)
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/api/client.py", line 277, in _raise_for_status
raise create_api_error_from_http_exception(e) from e
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/restic-compose-backup/.venv/lib/python3.12/site-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation) from e
docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/create: Internal Server Error ("container create: container dependency b68b86daed653b6a34f74873cf14ca0550bda075cb7c21d4a5354f82ee0b84a9 is part of a pod, but container is not: invalid argument")
2025-09-15 21:55:11,396 - INFO: No alerts configured
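
The IP-address part of the workaround can be sketched as a pure function over the container's inspect data (field names follow the Docker/Podman inspect schema; this is an illustration, not the exact code in the PR):

```python
def container_ip(inspect_data: dict) -> str:
    """Return the IP address of the first network the container is attached
    to, or "" if it has none. Connecting to the DB by IP avoids relying on
    DNS names that only resolve inside the original compose network."""
    networks = inspect_data.get("NetworkSettings", {}).get("Networks", {})
    for details in networks.values():
        return details.get("IPAddress", "")
    return ""
```

For example, an inspect payload with a single `stack_default` network whose `IPAddress` is `10.89.0.5` yields `"10.89.0.5"`.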

Owner

@lawndoc lawndoc left a comment


Nice simplification of the container filter loop. Please see my comment about the shared network stack.

Owner

lawndoc commented Sep 24, 2025

more reliable to use the hostname from the socket library instead.

This is fine; using socket should be backward compatible.

@AlexMcDermott AlexMcDermott marked this pull request as ready for review September 27, 2025 10:11
@AlexMcDermott AlexMcDermott changed the title from "Podman support and option of backup across all projects" to "Podman support and option to backup across all projects" Oct 15, 2025
Owner

lawndoc commented Oct 18, 2025

Doing some testing of this PR now, sorry it's taken me so long to get to this. Everything is going well in my previous test setup. Looks like the container filter changes will also fix #62 which is a nice bonus.

Still need to test:

  • database backup with multiple docker networks after the changes to the backup container's network stack
  • the new config variable to backup all projects with one backup container

Owner

lawndoc commented Oct 18, 2025

@AlexMcDermott your code is working as expected for individual projects, but it fails when backing up databases in other compose projects. This is because docker compose will create different networks for each compose project.

You can import other networks into your compose project, but that's not a good user experience.

Instead, I think we can attach the backup container to every network it needs, rather than just the network defined for the main backup container. I actually like this better anyway, because then we don't have to mess with the backup container's network stack even within a single compose project.
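One way the idea above could be realized with the docker SDK (a sketch only; `run_backup_container` and its parameters are illustrative names, and `client` is assumed to behave like a `docker.DockerClient`):

```python
def run_backup_container(client, image: str, command: str,
                         backup_networks: list):
    """Sketch: create the backup runner without inheriting any particular
    network stack, connect it to every network that hosts a database to
    back up, then start it. `client` is expected to behave like a
    docker.DockerClient."""
    container = client.containers.create(image=image, command=command)
    # Network.connect() attaches a created (not yet started) container
    # to each additional network it needs.
    for network in client.networks.list(ids=backup_networks):
        network.connect(container)
    container.start()
    return container
```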

Comment on lines +60 to +65
def network_details(self) -> dict:
"""dict: The network details of the container"""
network_settings: dict = self._data.get("NetworkSettings", {})
networks: dict = network_settings.get("Networks", {})
first_network = list(networks.values())[0]
return first_network
Owner

@lawndoc lawndoc Oct 18, 2025


This will need to return the full list of networks to be passed into the backup runner startup step later so we can call Network.connect() to add our backup runner to each network.

Suggested change
def network_details(self) -> dict:
"""dict: The network details of the container"""
network_settings: dict = self._data.get("NetworkSettings", {})
networks: dict = network_settings.get("Networks", {})
first_network = list(networks.values())[0]
return first_network
def network_details(self) -> list:
"""list: The network details of the container"""
network_settings: dict = self._data.get("NetworkSettings", {})
networks: dict = network_settings.get("Networks", {})
return list(networks.values())

Comment on lines +67 to +70
@property
def network_name(self) -> str:
"""str: The name of the network the container is connected to"""
return self.network_details.get("NetworkID", "")
Owner

@lawndoc lawndoc Oct 18, 2025


This is still needed for the initial network creation, but you will need to adapt this to select the first network within this function since network_details is now returning a list.

Suggested change
@property
def network_name(self) -> str:
"""str: The name of the network the container is connected to"""
return self.network_details.get("NetworkID", "")
@property
def network_name(self) -> str:
"""str: The name of the network the container is connected to"""
return self.network_details[0].get("NetworkID", "")

Comment on lines +72 to +75
@property
def ip_address(self) -> str:
"""str: IP address of the container"""
return self.network_details.get("IPAddress", "")
Owner

@lawndoc lawndoc Oct 18, 2025


You can adapt this to select the first network within this function since network_details is now returning a list

Suggested change
@property
def ip_address(self) -> str:
"""str: IP address of the container"""
return self.network_details.get("IPAddress", "")
@property
def ip_address(self) -> str:
"""str: IP address of the container"""
return self.network_details[0].get("IPAddress", "")

working_dir=os.getcwd(),
tty=True,
)

Owner

@lawndoc lawndoc Oct 18, 2025


After the above containers.create, this is where you'll iterate through the networks and Network.connect(container).

Then container.start()

Suggested change
networks = client.networks.list(ids=backup_networks)
for network in networks:
network.connect(container)
container.start()

environment: dict = None,
labels: dict = None,
source_container_id: str = None,
source_container_network: str = None,
Owner

@lawndoc lawndoc Oct 18, 2025


We'll need to pass in the list of networks for the runner to be attached to so it can reach all the containers it is backing up.

Suggested change
source_container_network: str = None,
source_container_network: str = None,
backup_networks: list = None,

volumes=volumes,
environment=containers.this_container.environment,
source_container_id=containers.this_container.id,
source_container_network=containers.this_container.network_name,
Owner

@lawndoc lawndoc Oct 18, 2025


We'll have to pass the backup_networks in here from the suggested new RunningContainers.networks_for_backup method

Suggested change
source_container_network=containers.this_container.network_name,
source_container_network=containers.this_container.network_name,
backup_networks=containers.networks_for_backup(),

def containers_for_backup(self):
"""Obtain all containers with backup enabled"""
return [container for container in self.containers if container.backup_enabled]

Owner

@lawndoc lawndoc Oct 18, 2025


We'll need to have a networks_for_backup method to get all of the networks needed for the backup runner.

Suggested change
def networks_for_backup(self) -> list:
"""Obtain all networks that have database containers to backup"""
networks = set()
for container in self.containers_for_backup():
if container.database_backup_enabled:
for network in container.network_details:
networks.add(network.get("NetworkID", ""))
return list(networks)

logger.info("Starting backup container")
client = utils.docker_client()

container = client.containers.run(
Owner


Let's change this to client.containers.create so that we can do Network.connect to add the networks it needs before starting the backup process.

Suggested change
container = client.containers.create(

Owner

@lawndoc lawndoc Oct 18, 2025


Might need to double-check the code if you apply my suggestion directly in GitHub. The PR comment is rendering the suggested change incorrectly on my end, and it looks like it will replace a blank line rather than the intended .run method.

(screenshot: the suggested change rendered against a blank line instead of the .run call)
