fix: start and stop commands crash when container state diverges from local config #306

@DivyanshuVortex


Reason/Context

The CLI currently crashes if you manually delete a container (e.g. with `docker rm` or a prune), because it only consults its own config file and never the actual Docker/Podman state. This fix makes the CLI check the real state of the container before acting.

This prevents users from getting stuck in a state where the CLI crashes and refuses to start or stop anything until the config file is deleted by hand.

Reproduction:

```shell
microcks start
docker rm -f microcks
microcks start   # false output: "Microcks instance is already running"
```

Description

The CLI needs to ask the Docker daemon for the real container state before acting on the state stored in the config file.
Changes introduced:

  1. Check the Docker daemon for container existence before attempting to create or stop it.
  2. Update `UpsertInstance` in `localconfig.go` to match by `Name` instead of `ContainerID`. Currently, every time a container is recreated (and therefore gets a new ID), the lookup by ID misses and a duplicate "ghost" entry is appended to the config file instead of the existing one being updated.
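The upsert-by-name change in point 2 could look roughly like this. The `Instance` and `Config` shapes here are assumptions for illustration; the real structs in `localconfig.go` likely carry more fields.

```go
package main

import "fmt"

// Instance mirrors one entry in the local config file (assumed shape;
// the real struct in localconfig.go may differ).
type Instance struct {
	Name        string
	ContainerID string
}

// Config holds the locally persisted instances (assumed shape).
type Config struct {
	Instances []Instance
}

// UpsertInstance matches on Name rather than ContainerID. A recreated
// container always has a new ID, so matching by ID would never find the
// old entry and would append a duplicate instead of updating in place.
func (c *Config) UpsertInstance(inst Instance) {
	for i := range c.Instances {
		if c.Instances[i].Name == inst.Name {
			c.Instances[i] = inst // refresh ContainerID and other fields
			return
		}
	}
	c.Instances = append(c.Instances, inst)
}

func main() {
	cfg := &Config{}
	cfg.UpsertInstance(Instance{Name: "microcks", ContainerID: "aaa"})
	// Container recreated: same name, new ID.
	cfg.UpsertInstance(Instance{Name: "microcks", ContainerID: "bbb"})
	fmt.Println(len(cfg.Instances), cfg.Instances[0].ContainerID) // 1 bbb
}
```

With this change, recreating a container simply refreshes the stored ID rather than growing the config file on every restart.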

Implementation ideas

Add a `GetContainer(name string)` method to the `ContainerClient` interface to inspect real container state.
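A rough sketch of how that method could be used to fix the `start` crash. Only the method name `GetContainer` comes from this issue; `ContainerState`, `ErrContainerNotFound`, `shouldCreate`, and the in-memory `fakeClient` (standing in for a real Docker/Podman client) are illustrative assumptions.

```go
package main

import (
	"errors"
	"fmt"
)

// ContainerState is a minimal view of what a container inspect would
// return; a real implementation would expose more fields (assumption).
type ContainerState struct {
	ID      string
	Running bool
}

// ErrContainerNotFound signals that the daemon has no container with
// that name, e.g. after `docker rm -f` (hypothetical sentinel error).
var ErrContainerNotFound = errors.New("container not found")

// ContainerClient gains GetContainer so commands can ask the daemon
// for the real state instead of trusting the config file alone.
type ContainerClient interface {
	GetContainer(name string) (*ContainerState, error)
}

// shouldCreate decides what `microcks start` should do: if the config
// remembers a container but the daemon has none, create a fresh one
// instead of crashing or claiming it is already running.
func shouldCreate(client ContainerClient, name string) (bool, error) {
	state, err := client.GetContainer(name)
	if errors.Is(err, ErrContainerNotFound) {
		return true, nil // stale config entry: recreate the container
	}
	if err != nil {
		return false, err
	}
	return !state.Running, nil
}

// fakeClient stands in for a Docker/Podman client in this sketch.
type fakeClient struct{ containers map[string]*ContainerState }

func (f *fakeClient) GetContainer(name string) (*ContainerState, error) {
	if c, ok := f.containers[name]; ok {
		return c, nil
	}
	return nil, ErrContainerNotFound
}

func main() {
	// Simulate the bug report: the config remembers "microcks", but the
	// container was removed with `docker rm -f microcks`.
	client := &fakeClient{containers: map[string]*ContainerState{}}
	create, _ := shouldCreate(client, "microcks")
	fmt.Println(create) // true: recreate instead of "already running"
}
```

The `stop` command can use the same check in reverse: if the daemon reports no container, drop the stale config entry and report a clean "not running" instead of crashing.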
