Conversation

@fulminemizzega

This commit fixes issue #528 by adding a default value to parameters layers and outputformat. This change aligns the behavior with podman-remote.

@inknos take a look, I've also added a test for this PR

@inknos
Contributor

inknos commented Jun 23, 2025

/packit retest-failed

@inknos inknos self-requested a review June 23, 2025 11:31
Contributor

@inknos inknos left a comment

Thanks @fulminemizzega for this PR. Please address my comments and we'll move forward :) Also, don't hesitate to ask questions or raise concerns.

Comment on lines 207 to 208
def default(value, def_value):
return def_value if value is None else value
Contributor

Having a function for this is overkill; parameters can be given defaults directly in the kwargs lookups.

Consider using kwargs.get(value, default), like here.
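The suggestion above can be sketched like this; the parameter names and default values are illustrative (taken from this discussion), not podman-py's actual signature. One subtlety worth noting: dict.get substitutes the default only when the key is absent, whereas the removed helper also replaced an explicit None.

```python
# Hypothetical sketch of the kwargs.get pattern suggested above.
# Parameter names and defaults are assumptions for illustration.
def build(**kwargs):
    # dict.get returns the default only when the key is missing;
    # an explicit layers=None would be passed through unchanged.
    layers = kwargs.get("layers", True)
    outputformat = kwargs.get("outputformat", "docker")
    return layers, outputformat

print(build())                # (True, 'docker')
print(build(layers=False))    # (False, 'docker')
```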

Author

I did not know about this. It makes more sense.

Author

This should be OK now. I'm a bit on the fence here: I started thinking about using an enum type for the outputformat parameter, so I played a bit with the code in another branch of my fork, removed all the kwargs handling, and moved all the keys to function parameters with type annotations and default values. Why do so many functions in this project use kwargs and a _render_params function? Is this something that could be useful, or is it better if I leave it alone?

Contributor

Why do so many functions in this project use kwargs

Historical reasons probably?

@jwhonce might give us a good answer

Member

@fulminemizzega kwargs was a good escape hatch for podman-py to ease the migration for scripts from docker-py. It allowed developers access to podman features without having to port their whole scripts. From there I suspect there are areas where new development followed the old form even if it didn't make as much sense.

The flexibility of args/kwargs has always been very Pythonic and pre-dates type hinting.

Author

I see, thanks for your answer. Would there be any value in a PR moving most of the kwargs to typed parameters, at least for images.build? I've tried to do it, but my changes break existing code (I made all the strings used for paths become pathlib.Path, so the argument has to be Path("something") instead of just "something").
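A minimal sketch of the kind of signature change being discussed; the parameter names and defaults are illustrative assumptions, not podman-py's real API. Accepting both str and Path and normalizing internally would avoid the breaking change mentioned above:

```python
from pathlib import Path
from typing import Optional, Union

# Hypothetical typed signature for illustration only; not podman-py's API.
# Accepting Union[str, Path] and normalizing keeps plain-string callers
# working, avoiding the breaking change described above.
def build(path: Optional[Union[str, Path]] = None, *,
          layers: bool = True, outputformat: str = "docker") -> dict:
    ctx = Path(path) if path is not None else None
    return {"path": ctx, "layers": layers, "outputformat": outputformat}

# Both call styles now work:
assert build("ctx")["path"] == Path("ctx")
assert build(Path("ctx"))["path"] == Path("ctx")
```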

self.assertIsNotNone(image)
self.assertIsNotNone(image.id)

def test_build_cache(self):
Contributor

More tests are probably needed to ensure that the defaults are passed and that images.build also accepts the other parameters.

Author

There are defaults like those above, and then there is dockerfile that is generated randomly... maybe this is something that should be done in a unit test?

Contributor

There are defaults like those above, and then there is dockerfile that is generated randomly... maybe this is something that should be done in a unit test?

I think you can do it all through integration tests by inspecting the requests that are passed. If you want, I can write you some pseudo-code to help you understand how I would design it.

Author

Yes please, I’m not familiar with this kind of stuff… how would I inspect what is requested? I’ve seen how it is mocked in unit tests, is it related?

Contributor

Yes please, I’m not familiar with this kind of stuff… how would I inspect what is requested? I’ve seen how it is mocked in unit tests, is it related?

Actually my bad, I was looking into many things at the same time and got confused. You are correct: unit tests are the place where you check your function. Also yes, you said it right: you need to mock a request and test that your image build passes the parameters correctly in the default/non-default cases.
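A self-contained sketch of that idea using unittest.mock; the client class is a stand-in for the real images client (so the example runs on its own), and the parameter names are assumptions from this thread:

```python
from unittest import mock

# Stand-in for the images client; build() forwards query parameters to an
# HTTP session, which is exactly the part the unit test wants to inspect.
class FakeImagesClient:
    def __init__(self, session):
        self.session = session

    def build(self, **kwargs):
        params = {
            "layers": kwargs.get("layers", True),
            "outputformat": kwargs.get("outputformat", "docker"),
        }
        return self.session.post("/build", params=params)

# Mock the session and inspect the request the client actually made.
session = mock.Mock()
FakeImagesClient(session).build()
_, sent = session.post.call_args
assert sent["params"]["layers"] is True          # default case

session.reset_mock()
FakeImagesClient(session).build(layers=False)
_, sent = session.post.call_args
assert sent["params"]["layers"] is False         # non-default case
```

In the real test suite the same check would be done by mocking the HTTP layer the way the existing unit tests already do, then asserting on the captured query parameters.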

Author

more tests are probably needed to ensure that the defaults are passed, and images.build also accepts the other parameters

About the other parameters, do you mean that, besides the new defaults, there should be a test that uses all of them and inspects the request? In that case, there is another issue: while experimenting with the parameter/argument conversion (see above), I discovered that _render_params parses a "remote" key. It is not documented, and it cannot be used alone, because _render_params requires either "path" or "fileobj" (both of which are meaningless if remote is used). Other than that, the build function is able to handle it, because body is initialized to None, and when all the if/elif branches fail it stays None, which is right for remote. But if dockerfile is None, it is generated randomly in _render_params, and that leads to yet another issue: "If the URI points to a tarball and the dockerfile parameter is also specified, there must be a file with the corresponding path inside the tarball" (from https://docs.podman.io/en/latest/_static/api.html#tag/images/operation/ImageBuildLibpod).
I understand this is going quite out of scope for the "cache" issue; I apologize in advance.
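The flow described above could be sketched as follows; the key names and behavior are my reading of the comment, not verified against the real _render_params:

```python
# Hypothetical reconstruction of the body-selection flow described above;
# names and details are illustrative, not podman-py's actual code.
def select_body(path=None, fileobj=None, remote=None):
    body = None
    if path is not None:
        body = ("tar", path)          # the real code tars the context dir
    elif fileobj is not None:
        body = fileobj.read()
    elif remote is None:
        raise ValueError("either path, fileobj or remote is required")
    # When only remote is given, body stays None, which matches the API:
    # the server fetches the build context from the remote URI itself.
    return body

assert select_body(remote="https://example.com/ctx.tar") is None
```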

Author

I've added a unit test that checks the default parameters for images.build; with it, I also corrected the mock URL for two other test cases.

Member

@Honny1 Honny1 left a comment

Thanks, LGTM. Just one nonblocking nit.


def test_build_cache(self):
"""Check that building twice the same image uses caching"""
buffer = io.StringIO("""FROM quay.io/libpod/alpine_labels:latest\nLABEL test=value""")
Member

You can use FROM scratch so you don't rely on an external image. (This is nonblocking)

Suggested change
buffer = io.StringIO("""FROM quay.io/libpod/alpine_labels:latest\nLABEL test=value""")
buffer = io.StringIO("""FROM scratch\nLABEL test=value""")

Author

I've followed your advice and also applied it in test_build

@Honny1
Member

Honny1 commented Sep 24, 2025

/packit retest-failed

@inknos
Contributor

inknos commented Sep 25, 2025

/packit build

This commit fixes issue containers#528 by adding a default value to
parameters layers and outputformat. This change aligns the
behavior with podman-remote.

Signed-off-by: Federico Rizzo <[email protected]>
Replace the "default" helper inside _render_params in images_build.py
with dict.get (kwargs.get) and its default-value argument.

Signed-off-by: Federico Rizzo <[email protected]>
Avoid using external images in test_build and test_build_cache;
check build caching both enabled and disabled.

Signed-off-by: Federico Rizzo <[email protected]>
@Honny1
Member

Honny1 commented Nov 24, 2025

/packit retest-failed

@Honny1
Member

Honny1 commented Nov 24, 2025

It seems like there is an issue with the tests. Any ideas? @inknos @fulminemizzega

=================================== FAILURES ===================================
____________________ ImagesIntegrationTest.test_image_crud _____________________

self = <podman.tests.integration.test_images.ImagesIntegrationTest testMethod=test_image_crud>

    def test_image_crud(self):
        """Test Image CRUD.
    
        Notes:
            Written to maximize reuse of pulled image.
        """
    
        with self.subTest("Pull Alpine Image"):
            image = self.client.images.pull("quay.io/libpod/alpine", tag="latest")
            self.assertIsInstance(image, Image)
            self.assertIn("quay.io/libpod/alpine:latest", image.tags)
            self.assertTrue(self.client.images.exists(image.id))
    
        with self.subTest("Inspect Alpine Image"):
            image = self.client.images.get("quay.io/libpod/alpine")
            self.assertIsInstance(image, Image)
            self.assertIn("quay.io/libpod/alpine:latest", image.tags)
    
        with self.subTest("Retrieve Image history"):
            ids = [i["Id"] for i in image.history()]
>           self.assertIn(image.id, ids)
E           AssertionError: '961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4' not found in ['f88416a00aabae4c9520ee61e1692b50faf551e0d08e2196b736a9e0e63ad060', '<missing>']

podman/tests/integration/test_images.py:69: AssertionError

@fulminemizzega
Author

It seems like there is an issue with the tests. Any ideas? @inknos @fulminemizzega

For this sub-case I still do not have any idea. I have had it working on two different systems after resetting my podman environment (with podman system reset, since they've deprecated the BoltDB database without any migration tool). Before resetting I also had this test failing, and I've just checked: it would fail again now. I really have no idea why the libpod/alpine image's history does not contain the id of the image itself.
Just to be sure, I've run podman system reset again, and now after a pull I get:

# podman history libpod/alpine
ID            CREATED      CREATED BY                                     SIZE        COMMENT
961769676411  6 years ago  /bin/sh -c #(nop)  CMD ["/bin/sh"]             0B
<missing>     6 years ago  /bin/sh -c #(nop) ADD file:fe64057fbb83dcc...  5.84MB

which is ok.
On another system it is:

$ podman history libpod/alpine
ID            CREATED      CREATED BY                                     SIZE        COMMENT
d054a073bda4  6 years ago  /bin/sh -c #(nop)  CMD ["/bin/sh"]             0B
<missing>     6 years ago  /bin/sh -c #(nop) ADD file:fe64057fbb83dcc...  5.84MB

There is another failing test where I have a theory, and it is related to caching: subtest "Deleted unused Images" in test_images.py:test_image_crud (line 112) will fail because test_containers.py:test_container_commit leaves behind an image built from libpod/alpine. When the subtest tries to delete all the unused images, it fails because the libpod/alpine image is in use by the new localhost/busybox.local:unittest.
I did not catch this because I do not run all the tests on my dev machine, since doing so destroys all running containers... but the image history test case is a mystery.

@fulminemizzega
Author

Disregard what I've written yesterday. The issue is related to caching and an intermediary image left over by test_containers.py:test_container_rm_anonymous_volume. This and some other steps performed in test_images.py are enough to explain everything, I think. I want to reproduce the steps with just podman and see if the results are the same.

Remove image left by integration test
test_containers.py:test_container_rm_anonymous_volume.
When caching is enabled, an intermediary image generated by the
build function call (corresponding to layer created by
"VOLUME myvol", see container_file string defined in the first
lines of the function test_container_rm_anonymous_volume) breaks
test_images.py:test_image_crud sub-test "Delete Image", where the
same base image (quay.io/libpod/alpine:latest) is supposed to be
removed, but is instead untagged, as it is in "use" by the other
layers. This also breaks sub-test "Delete unused Images" and,
on successive runs, "Retrieve Image history".

Signed-off-by: Federico Rizzo <[email protected]>
@fulminemizzega
Author

I have written most of the details in the last commit; I do not know whether the way I solved it is reasonable. I think the cleanup should happen even if test_containers.py:test_container_rm_anonymous_volume fails; otherwise, if this test fails and leaves behind the built image, the same tests in test_images.py will fail. On the other hand, the tested behavior (podman deleting anonymous volumes) is not really a concern of podman-py...

@Honny1
Member

Honny1 commented Nov 27, 2025

/packit rebuild-failed

Member

@Honny1 Honny1 left a comment

Thanks for the comprehensive investigation. LGTM

@openshift-ci
Contributor

openshift-ci bot commented Nov 27, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: fulminemizzega, Honny1
Once this PR has been reviewed and has the lgtm label, please assign mheon for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Honny1
Member

Honny1 commented Nov 27, 2025

PTAL @inknos

@fulminemizzega
Author

I've investigated the failing pre-commit check a bit: the tmt lint step reports an error. I can reproduce it with Python 3.14 and tmt 1.39; it works fine with Python 3.13. If it is not an issue, updating tmt to the latest version in .pre-commit-config.yaml (I tested 1.62.1 with Python 3.14.0) fixes it.
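The bump being suggested would be a config change along these lines; the repo URL, hook id, and rev below are assumptions (check tmt's documentation for the exact values), a sketch rather than the project's actual file:

```yaml
# Assumed fragment of .pre-commit-config.yaml; repo URL, hook id and rev
# are illustrative, based on the version mentioned above.
repos:
  - repo: https://github.com/teemtee/tmt.git
    rev: 1.62.1
    hooks:
      - id: tmt-lint
```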
