[18.0][IMP] queue_job: handle same uuid in duplicated db#852
hoangtrann wants to merge 1 commit into OCA:18.0
Conversation
Hi @guewen,
69a3b8e to 76d29bd
    job.db_name,
    db_name,
)
self.remove_job(uuid)
I'm not quite sure this is a correct thing to do. We still end up with two jobs with the same uuid in the queue and that's going to be confusing, if only when reading the logs. I suspect it will also break things down the line.
If that case is important to you, perhaps you could have a little script that updates uuid and dbname in the jobs of the cloned database?
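The suggested cleanup could be sketched as a small helper that regenerates the identifiers of the cloned database's jobs. This is a minimal, hypothetical illustration, not code from queue_job: the function name `reassign_job_identifiers` and the plain-dict job rows are assumptions, and a real script would run an `UPDATE` against the `queue_job` table of the cloned database instead. It also regenerates `graph_uuid` consistently (jobs sharing a graph keep sharing one), since that identifier is expected to be unique as well.

```python
import uuid

def reassign_job_identifiers(jobs, new_db_name):
    """Return copies of job rows with fresh uuids for a cloned database.

    ``jobs`` is assumed to be a list of dicts with ``uuid``, ``db_name``
    and optionally ``graph_uuid`` keys (an illustrative stand-in for rows
    of the ``queue_job`` table).
    """
    # Map old graph_uuid -> new graph_uuid, so jobs that belonged to the
    # same graph still share one (new) graph identifier after the rewrite.
    graph_map = {}
    fixed = []
    for job in jobs:
        new = dict(job)
        new["uuid"] = str(uuid.uuid4())
        new["db_name"] = new_db_name
        old_graph = new.get("graph_uuid")
        if old_graph:
            graph_map.setdefault(old_graph, str(uuid.uuid4()))
            new["graph_uuid"] = graph_map[old_graph]
        fixed.append(new)
    return fixed
```

Run once against the clone right after duplication, before the jobrunner picks the database up, both uuid collisions and graph_uuid collisions with the source database disappear.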
sbidoul left a comment
Another problem when duplicating a database is graph_uuid, which is also supposed to be unique.
So at this point I don't think this is an easy thing to support.
That's a fair point, let me think about this more.
There hasn't been any activity on this pull request in the past 4 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days.
Because the jobrunner can be initialized on an instance serving multiple databases, cloning a database that contains failed jobs would cause the jobrunner to fail, leaving jobs hanging.
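The failure mode described above can be illustrated with a toy model (this is not the actual jobrunner code; `NaiveRunner` and its methods are hypothetical): a runner watching several databases that tracks pending jobs by uuid alone cannot tell the original job and its clone apart, so one database's entry silently shadows the other's.

```python
class NaiveRunner:
    """Toy model of a multi-database job runner keyed by uuid alone."""

    def __init__(self):
        # uuid -> db_name; a shared uuid from a cloned db collides here.
        self._jobs = {}

    def notify(self, db_name, job_uuid):
        # The clone's notification silently overwrites the original's entry.
        self._jobs[job_uuid] = db_name

    def db_for(self, job_uuid):
        return self._jobs.get(job_uuid)
```

With a cloned database, both `prod` and `prod_copy` announce the same uuid, and the runner only remembers whichever database notified last; keying on the `(db_name, uuid)` pair instead would keep the entries distinct.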