Implement blend modes (#320) #791

Open · wants to merge 1 commit into base: develop

Conversation

nilp0inter

This pull request implements blend modes, as per #320.

I tested it manually and it seems to work OK. I'd like to add some automated tests, but I'd need some guidance to do it (I am not a C++ programmer).

Thanks for the great software!

@ferdnyc
Contributor

ferdnyc commented Jan 11, 2022

@nilp0inter Thanks for submitting this! I think it's a great idea. Some initial thoughts, I'll follow up with inline comments afterwards, to address specifics.

One minor factor here is this note from the Qt 5.5 docs:

When the paint device is a QImage, the image format must be set to Format_ARGB32_Premultiplied or Format_ARGB32 for the composition modes to have any effect. For performance the premultiplied version is the preferred format.

Our images, notably, are not any type of Format_ARGB32, they're Format_RGBA8888_Premultiplied. Now, you say you tested this and it works (and I believe you), which is explained by the fact that in the Qt 5.15 docs, that paragraph changes to:

Several composition modes require an alpha channel in the source or target images to have an effect. For optimal performance the image format Format_ARGB32_Premultiplied is preferred.

So it appears that at some point between Qt 5.5 and 5.15, they removed that limitation. Whatever that cutoff point is, AIUI these modes won't work with any older Qt version.

That's not necessarily a dealbreaker — we can always just document the limitation, but it's something we should be aware of. And probably try to figure out where the version cutoff is.

I initially went into the archived docs to check whether QPainter composition mode enum values are defined the same in all Qt versions. It looks like they are, in Qt 6 and as far back in Qt 5 as matters to us. (Qt 5.5 is already older than we support.) But I'll write more about that in my review comments.

@ferdnyc ferdnyc left a comment

So, I had some small suggestions... 😉

(Really, don't hate me.)

src/Clip.h Outdated
@@ -157,6 +157,7 @@ namespace openshot {
openshot::AnchorType anchor; ///< The anchor determines what parent a clip should snap to
openshot::FrameDisplayType display; ///< The format to display the frame number (if any)
openshot::VolumeMixType mixing; ///< What strategy should be followed when mixing audio with other clips
openshot::BlendType blend; ///< What strategy should be followed when mixing video with other clips

I think we should make this an openshot::Keyframe property. There are plenty of good reasons why someone might want to change the blend mode used on a particular Clip halfway through, and it would provide a much greater range of "special effects" type flexibility if they weren't locked into a single mode for the clip duration.

In practice it doesn't change a whole lot, making it a Keyframe property — it just means that it has to be queried on a frame-by-frame basis, you'd look up the value for the current frame with blend.GetValue(frame->number), instead of reading it directly.

Everything else involving the Keyframe processing / interface is already taken care of by the Keyframe class and the JSON metadata you're already setting.

Oh, the declaration should be moved down with the others around line 280, though. You can probably just plop it right in after alpha, they're pretty related.

src/Enums.h Outdated
Comment on lines 138 to 147
enum BlendType
{
BLEND_SOURCEOVER = QPainter::CompositionMode_SourceOver, ///< This is the default mode. The alpha of the current clip is used to blend the pixel on top of the lower layer.
BLEND_DESTINATIONOVER = QPainter::CompositionMode_DestinationOver, ///< The alpha of the lower layer is used to blend it on top of the current clip pixels. This mode is the inverse of BLEND_SOURCEOVER.
BLEND_CLEAR = QPainter::CompositionMode_Clear, ///< The pixels in the lower layer are cleared (set to fully transparent) independent of the current clip.
BLEND_SOURCE = QPainter::CompositionMode_Source, ///< The output is the current clip pixel. (This means a basic copy operation and is identical to SourceOver when the current clip pixel is opaque).
BLEND_DESTINATION = QPainter::CompositionMode_Destination, ///< The output is the lower layer pixel. This means that the blending has no effect. This mode is the inverse of BLEND_SOURCE.
BLEND_SOURCEIN = QPainter::CompositionMode_SourceIn, ///< The output is the current clip,where the alpha is reduced by that of the lower layer.
BLEND_DESTINATIONIN = QPainter::CompositionMode_DestinationIn, ///< The output is the lower layer,where the alpha is reduced by that of the current clip. This mode is the inverse of BLEND_SOURCEIN.
BLEND_SOURCEOUT = QPainter::CompositionMode_SourceOut, ///< The output is the current clip,where the alpha is reduced by the inverse of lower layer.

OK, so don't hate me... but I don't think we should piggyback directly off QPainter like this.

It's not JUST because I'd rather avoid including <QPainter> in one of our headers (though that's true, especially one that's loaded everywhere), it's also because tying our Blend mode values directly to their enum has a good chance of biting us later on. The FFmpeg code is a spaghetti mass of #ifdefs to try and work around version changes in the underlying API, and I'd rather avoid ending up with the same thing here. Using QPainter::CompositionMode_* directly means:

  1. If Qt ever changes their API, and either deprecates or alters these values, we have project files out "in the wild" that contain unsupported data which we might then have to special-case for.
  2. If we ever decide to handle blend-mode support differently, or if we switch away from Qt to something else entirely, then we have a bunch of project files out in the wild that contain data we don't have a definition for... or we're forced to continue importing QPainter just for its enum values even though we no longer use it.
  3. If we define our own stable set of values, we have more freedom to deviate from Qt's definition. Like, we may want to drop some of these — BLEND_CLEAR, at least, seems questionable — and our Keyframe implementation isn't really compatible with value ranges that have holes in them.

So, I'd suggest dropping the assignments and just let the compiler number the set, same as the other enums:

Suggested change
enum BlendType
{
BLEND_SOURCEOVER = QPainter::CompositionMode_SourceOver, ///< This is the default mode. The alpha of the current clip is used to blend the pixel on top of the lower layer.
BLEND_DESTINATIONOVER = QPainter::CompositionMode_DestinationOver, ///< The alpha of the lower layer is used to blend it on top of the current clip pixels. This mode is the inverse of BLEND_SOURCEOVER.
BLEND_CLEAR = QPainter::CompositionMode_Clear, ///< The pixels in the lower layer are cleared (set to fully transparent) independent of the current clip.
BLEND_SOURCE = QPainter::CompositionMode_Source, ///< The output is the current clip pixel. (This means a basic copy operation and is identical to SourceOver when the current clip pixel is opaque).
BLEND_DESTINATION = QPainter::CompositionMode_Destination, ///< The output is the lower layer pixel. This means that the blending has no effect. This mode is the inverse of BLEND_SOURCE.
BLEND_SOURCEIN = QPainter::CompositionMode_SourceIn, ///< The output is the current clip,where the alpha is reduced by that of the lower layer.
BLEND_DESTINATIONIN = QPainter::CompositionMode_DestinationIn, ///< The output is the lower layer,where the alpha is reduced by that of the current clip. This mode is the inverse of BLEND_SOURCEIN.
BLEND_SOURCEOUT = QPainter::CompositionMode_SourceOut, ///< The output is the current clip,where the alpha is reduced by the inverse of lower layer.
enum BlendType
{
BLEND_SOURCEOVER, ///< This is the default mode. The alpha of the current clip is used to blend the pixel on top of the lower layer.
BLEND_DESTINATIONOVER, ///< The alpha of the lower layer is used to blend it on top of the current clip pixels. This mode is the inverse of BLEND_SOURCEOVER.
BLEND_CLEAR, ///< The pixels in the lower layer are cleared (set to fully transparent) independent of the current clip.
BLEND_SOURCE, ///< The output is the current clip pixel. (This means a basic copy operation and is identical to BLEND_SOURCEOVER when the current clip pixel is opaque).
BLEND_DESTINATION, ///< The output is the lower layer pixel. This means that the blending has no effect. This mode is the inverse of BLEND_SOURCE.
BLEND_SOURCEIN, ///< The output is the current clip, where the alpha is reduced by that of the lower layer.
BLEND_DESTINATIONIN, ///< The output is the lower layer, where the alpha is reduced by that of the current clip. This mode is the inverse of BLEND_SOURCEIN.
BLEND_SOURCEOUT, ///< The output is the current clip, where the alpha is reduced by the inverse of the lower layer.

...and so on for the rest of the enum list.


(I say "don't hate me" because this clearly required a lot of tedious effort. And here I come, cavalierly asking, "It's great, can you just change everything?" So... yeah, sorry about that.)

src/Enums.h Outdated
Comment on lines 148 to 150
BLEND_DESTINATIONOUT = QPainter::CompositionMode_DestinationOut, ///< The output is the lower layer, where the alpha is reduced by the inverse of the current clip. This mode is the inverse of BLEND_SOURCEOUT.
BLEND_SOURCEATOP = QPainter::CompositionMode_SourceAtop, ///< The current clip pixel is blended on top of the lower layer, with the alpha of the current clip pixel reduced by the alpha of the lower layer pixel.
BLEND_DESTINATIONATOP = QPainter::CompositionMode_DestinationAtop, ///< The lower layer pixel is blended on top of the current clip, with the alpha of the lower layer pixel reduced by the alpha of the current clip pixel. This mode is the inverse of BLEND_SOURCEATOP.

For the types that Qt had in CamelCase, because our enum labels are all-caps I think it's better to insert underscores between words, for readability. IOW, BLEND_DESTINATION_ATOP, BLEND_SOURCE_OUT, etc.

src/Enums.h Outdated
@@ -13,6 +13,7 @@
#ifndef OPENSHOT_ENUMS_H
#define OPENSHOT_ENUMS_H

#include <QPainter>

This won't be necessary here, if we let the enum assign new values. (The mapping to QPainter::CompositionMode_* will happen in Clip.cpp, where <QPainter> is already included because we're using it there.)

src/Clip.cpp Outdated
@@ -907,6 +935,7 @@ Json::Value Clip::JsonValue() const {
root["anchor"] = anchor;
root["display"] = display;
root["mixing"] = mixing;
root["blend"] = blend;

So with a Keyframe property, like the others you'd want to set this to the JSON representation of its current data:

Suggested change
root["blend"] = blend;
root["blend"] = blend.JsonValue();

src/Clip.cpp Outdated
Comment on lines 1027 to 1039
if (!root["blend"].isNull())
blend = (BlendType) root["blend"].asInt();

And this becomes...

Suggested change
if (!root["blend"].isNull())
blend = (BlendType) root["blend"].asInt();
if (!root["blend"].isNull())
blend.setJsonValue(root["blend"]);

src/Clip.cpp Outdated
@@ -761,6 +762,7 @@ std::string Clip::PropertiesJSON(int64_t requested_frame) const {
root["scale"] = add_property_json("Scale", scale, "int", "", NULL, 0, 3, false, requested_frame);
root["display"] = add_property_json("Frame Number", display, "int", "", NULL, 0, 3, false, requested_frame);
root["mixing"] = add_property_json("Volume Mixing", mixing, "int", "", NULL, 0, 2, false, requested_frame);
root["blend"] = add_property_json("Blend Mode", blend, "int", "", NULL, 0, 23, false, requested_frame);

This would move down into the Keyframe property section and become...

Suggested change
root["blend"] = add_property_json("Blend Mode", blend, "int", "", NULL, 0, 23, false, requested_frame);
root["blend"] = add_property_json("Blend Mode", blend.GetValue(requested_frame), "int", "", &blend, 0, 23, false, requested_frame);

src/Clip.cpp Outdated
Comment on lines 800 to 818
// Add video blend choices (dropdown style)
root["blend"]["choices"].append(add_property_choice_json("Source Over", BLEND_SOURCEOVER, blend));
root["blend"]["choices"].append(add_property_choice_json("Destination Over", BLEND_DESTINATIONOVER, blend));
root["blend"]["choices"].append(add_property_choice_json("Clear", BLEND_CLEAR, blend));
root["blend"]["choices"].append(add_property_choice_json("Source", BLEND_SOURCE, blend));
root["blend"]["choices"].append(add_property_choice_json("Destination", BLEND_DESTINATION, blend));
root["blend"]["choices"].append(add_property_choice_json("Source In", BLEND_SOURCEIN, blend));
root["blend"]["choices"].append(add_property_choice_json("Destination In", BLEND_DESTINATIONIN, blend));

And then this whole section moves down below that (so that root["blend"] is already defined before we start adding choices). Other than moving it, the only change needed is that each blend third argument becomes blend.GetValue(requested_frame) instead.

src/Clip.cpp Outdated
Comment on lines 1272 to 1320
painter.setCompositionMode(QPainter::CompositionMode_SourceOver);
painter.setCompositionMode((QPainter::CompositionMode) blend);

So, here's where you'd need to make the last change. Instead of passing the value of blend to the QPainter instance directly, you'll need to map it from our enum to theirs.

Probably easiest to write a small utility method for this, but ultimately up to you. Assuming you do write one, let's call it

QPainter::CompositionMode Clip::blendToQPainter(openshot::BlendType blend) {
    // ...
}

There's a bunch of different ways you can do the mapping (including building a std::map<openshot::BlendType, QPainter::CompositionMode> to index the Qt modes by our enum), but there are some advantages to sticking with the basics.

If you use a basic switch(blend) to drive the conversion, and you don't give it a default: branch, then the compiler will (with the right settings) emit a compile-time warning if any of the BlendMode enum values are missed by the switch statement. (So, you should include a case for BLEND_SOURCE_OVER explicitly, even though it's the default. Just to silence those warnings.)

The compiler's monitoring of switch usage becomes especially handy further down the road, when someone decides to change or add elements to the enum, and forgets to update the code that uses it.

But basically that's it, just fill it with a bunch of cases like

case BLEND_DESTINATION_ATOP:
    painter_mode = QPainter::CompositionMode_DestinationAtop;
    break;
case BLEND_HARD_LIGHT:
    painter_mode = QPainter::CompositionMode_HardLight;
    break;

Lather, rinse, repeat, and finally return painter_mode;. The return value of blendToQPainter() you could even pass directly to painter.setCompositionMode(); you don't really need it for anything else. But that's entirely up to you.

@ferdnyc
Contributor

ferdnyc commented Jan 12, 2022

@nilp0inter

Re: tests... tests would be good, for sure, if you're willing to work on them. They certainly don't have to be exhaustive, but at least a couple that confirm the basic modes are working as expected would be good for peace of mind.

This PR may also crater coverage without them... though I don't actually think so, since all of the enum values are accessed by add_property_choice_json() calls. The only two lines of source you add that might not be covered by existing tests (and even they might be) are these two:

libopenshot/src/Clip.cpp

Lines 1027 to 1028 in 0d1ae39

if (!root["blend"].isNull())
blend = (BlendType) root["blend"].asInt();

Anyway, as far as writing tests, the existing tests/Clip.cpp and tests/ChromaKey.cpp can be used as templates. Our unit test framework is Catch2, which is really simple to use. My 'primer' on it (which was targeted towards existing developers switching from our previous framework, but also works well enough as a general intro) is in the repo wiki.

My knee-jerk thought is that the tests should go into tests/Clip.cpp, since they're testing a new Clip property. However, that property is only testable in the context of a Timeline object with multiple Clips, and in order to test it you'd need to write tests that look more like the ones in tests/Timeline.cpp. (The argument could even be made that they should actually be Timeline tests, since the property is essentially meaningless for a single Clip object. So I can see the logic in that position, too. I wouldn't object to either test placement.)

Wherever they get put, in spirit the tests would be a hybrid of the existing Timeline and ChromaKey tests.

Actually, for starters you'd want to test any conversion method(s) you add to the Clip class, since it's easy to whip one of those off. If there are any issues with the conversion, they'll be the source of hard-to-find bugs later on. (And there you have it, the entire argument for unit-testing condensed into an easy-to-swallow pill form.)

For the blend tests, you'd start off like the tests/Timeline.cpp cases "two-track video" or "Effect order":

  1. Construct a Timeline object

  2. Put some Clips on it, with overlapping Positions but on different Layers (tracks)

  3. Set/adjust the Clip.blend member for at least one Clip.

    Clip properties are stored as openshot::Keyframe objects; you fix the
    value at certain points on the Timeline by inserting one or more
    openshot::Point(x, y) values corresponding to (frame number, property
    value). The value will be interpolated between those points to form a
    curve, which can be adjusted using additional properties on the
    Keyframe's fixed Points. Only a single, simple numeric value is
    supported for each property, represented by the y-axis values of the
    Keyframe curve. (Color properties are a special case consisting of
    three separate Keyframe objects, one for each color channel; if alpha
    is supported, it's a separate property.)

    (In fact, the Keyframe class — via Point, via Coordinate, which is the
    basic simple (x, y) pair without additional parameters — can only store
    floats. The JSON description for each property allows further
    constraints, like the "source" (or destination) datatype for that
    property's values and the min/max range of values accepted.)

  4. Call Timeline::GetFrame(number) to produce composited frame(s)

  5. Confirm that the results are as expected

I would strongly suggest constructing test data more like the ChromaKey tests, though:

  • Rather than importing "live" test data, use either Frames filled with solid color,
    or Frames containing simple drawings. (Like a 640×360 red square centered
    in a transparent 1280×720 frame, that kind of thing.) It'll make the
    expected results of the blends predictable and simple to determine.

  • Testing Clip compositing on the Timeline is a little trickier than Effect
    testing. You can just feed Frames directly to an Effect to modify, but
    that won't work on the Timeline. But you can fake it with the DummyReader.

    See tests/DummyReader.cpp for examples, but basically if you create a
    CacheMemory object, shove Frames into it, and hand it to DummyReader,
    it will return those Frames when its GetFrame() is called.

    So if you pass that DummyReader instance to the Clip() constructor,
    it'll set the output frames for the Clip. It should be safe to use
    the same DummyReader with multiple Clips (if not, that's a bug to report),
    which would provide a good, easy way to test the basic Blend features:
    Create a few Clips with the same Frames, set different blend parameters,
    and confirm that this has the expected effect on the images produced when you
    call Timeline::GetFrame().

    You unfortunately can't use the QtImageReader to supply still images for
    Clips, because I'm now noticing that the only option for constructing one
    is to give it the path to a disk file as a string. There's no constructor
    that takes a QImage, even though the first thing it does is load the
    supplied path into a QImage which it stores and uses to construct its
    output Frames.

    IOW, not having a QtImageReader(QImage) constructor is dumb,
    and we should fix that.

  • For confirming the results, definitely prefer the method used in ChromaKey
    tests: Use Frame::GetImage() to retrieve the output QImage, call its
    QImage::pixelColor(x, y) method to examine individual pixels, and
    compare the value returned to a QColor() of the expected value.

    The alternatives are either direct byte-array spelunking (testing each
    individual channel in turn), or CheckPixel() which is... bad.

    There's a stream output operator definition at the top of
    tests/ChromaKey.cpp that lets Catch2 display QColor values, which
    becomes useful when tests fail. With it you get messages like:

    tests/ChromaKey.cpp:69: FAILED:
      CHECK( pix_e == expected )
    with expansion:
      QColor(0, 204, 0, 255)
      ==
      QColor(0, 192, 0, 255)
    

    Without it, that's:

    tests/ChromaKey.cpp:69: FAILED:
      CHECK( pix_e == expected )
    with expansion:
      {?} == {?}
    

    (With CheckPixel it's almost as unhelpful:)

    tests/FFmpegReader.cpp:95: FAILED:
      CHECK( f->CheckPixel(10, 112, 21, 191, 84, 255, 5) == true )
    with expansion:
      false == true
    

    You could just copy-paste the code to where you need it, or (preferably)
    we can move it into a header file. Includes living in the tests directory
    might require a slight include-path adjustment in CMake, but I can take
    care of that.
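Putting those pieces together, the shape of such a test might look something like this. A sketch only, not compiled: the blend mode name, DummyReader constructor arguments, and exact expected colors are assumptions to check against the actual headers and blend math.

```cpp
// Sketch only (assumes libopenshot + Catch2; not compiled here).
TEST_CASE( "blend modes", "[libopenshot][clip]" )
{
    // Solid-color frame served through CacheMemory + DummyReader
    auto frame = std::make_shared<openshot::Frame>(1, 1280, 720, "#ff0000");
    openshot::CacheMemory cache;
    cache.Add(frame);
    openshot::DummyReader reader(
        openshot::Fraction(30, 1), 1280, 720, 44100, 2, 30.0, &cache);

    openshot::Timeline t(1280, 720, openshot::Fraction(30, 1),
                         44100, 2, openshot::LAYOUT_STEREO);

    openshot::Clip lower(&reader);   // layer 0
    openshot::Clip upper(&reader);   // layer 1, overlapping position
    upper.Layer(1);
    // Hypothetical mode name; use whatever the final enum defines
    upper.blend.AddPoint(1, openshot::BLEND_MULTIPLY);
    t.AddClip(&lower);
    t.AddClip(&upper);
    t.Open();

    auto out = t.GetFrame(1)->GetImage();
    // red multiplied with red should stay red
    CHECK(out->pixelColor(640, 360) == QColor("#ff0000"));
}
```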

@ferdnyc
Contributor

ferdnyc commented Jan 12, 2022

I have no idea why the CI jobs are all failing, but it's some sort of GitHub Actions issue. They're not even getting far enough into the process to load the repo code, never mind compile it. I'm sure it'll clear up shortly.

@ferdnyc
Contributor

ferdnyc commented Jan 12, 2022

I have no idea why the CI jobs are all failing

We seem to be back in business.

@codecov

codecov bot commented Jan 12, 2022

Codecov Report

Merging #791 (30053c9) into develop (cfca6e7) will increase coverage by 0.12%.
The diff coverage is 87.87%.

❗ Current head 30053c9 differs from pull request most recent head 56bd955. Consider uploading reports for the commit 56bd955 to get more accurate results

@@             Coverage Diff             @@
##           develop     #791      +/-   ##
===========================================
+ Coverage    48.92%   49.04%   +0.12%     
===========================================
  Files          184      184              
  Lines        15815    15733      -82     
===========================================
- Hits          7738     7717      -21     
+ Misses        8077     8016      -61     
Impacted Files Coverage Δ
src/Clip.h 62.50% <ø> (ø)
src/FFmpegWriter.cpp 60.69% <50.00%> (-0.71%) ⬇️
src/Clip.cpp 45.55% <90.00%> (+1.27%) ⬆️
src/Timeline.cpp 42.47% <100.00%> (-0.18%) ⬇️
src/FFmpegReader.cpp 68.17% <0.00%> (-5.60%) ⬇️
src/AudioReaderSource.cpp 0.86% <0.00%> (-1.36%) ⬇️
src/MagickUtilities.cpp 95.65% <0.00%> (-0.19%) ⬇️
... and 30 more


@jeffski
Contributor

jeffski commented Jan 12, 2022

Thanks @nilp0inter for working on this. Really look forward to seeing this added as a new feature.

@nilp0inter
Author

Thank you @ferdnyc for the thorough review, it is really appreciated. I'll be back working on this over the weekend.

@ferdnyc
Contributor

ferdnyc commented Jan 14, 2022

@nilp0inter Now that I've finally fixed all of the dumb typos and Qt versioning vagaries that were preventing it from passing CI, my unit-test-tools branch is up at PR #795. So if you do find time to work on unit tests, it takes care of both of the things I wrote about the other day, regarding unit testing:

  1. The QColor() stream output operator is moved into a header file, so that any test class can #include it. (It also adds a QSize operator to go with the QColor() one, not that you're likely to need it.) Nothing more is required, other than #include "test_utils.h", for Catch2 to display any QColor() and QSize() values used in comparisons when a test fails.

  2. There's now a QtImageReader(const QImage&) overload to the QtImageReader constructor, which provides a much simpler means of creating Clip objects that generate Frames which have a particular, predefined QImage as their video component. Should make testing of compositing a lot simpler and more predictable.

(One caveat: When I say "Frames which have a particular, predefined QImage as their video component", that's subject to the image scaling that gets applied throughout libopenshot — if you want the Frames that come out of Timeline::GetFrame() to contain the same image you passed in to QtImageReader(), make sure to create that Timeline with the same dimensions your image has. Otherwise you'll discover that things are getting resized on you.)
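Concretely, a sketch of what that setup could look like, assuming the QtImageReader(const QImage&) overload from #795 (not compiled here):

```cpp
// Sketch only (assumes the QtImageReader(const QImage&) overload from #795).
QImage img(1280, 720, QImage::Format_RGBA8888_Premultiplied);
img.fill(Qt::transparent);
QPainter p(&img);
p.fillRect(320, 180, 640, 360, Qt::red);  // centered 640x360 red square
p.end();

openshot::QtImageReader reader(img);
openshot::Clip clip(&reader);

// Timeline dimensions match the image, so frames come through unscaled
openshot::Timeline t(1280, 720, openshot::Fraction(30, 1),
                     44100, 2, openshot::LAYOUT_STEREO);
t.AddClip(&clip);
```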

@jeffski
Contributor

jeffski commented Sep 4, 2022

Any update on this one? Would be a great feature!

@nilp0inter
Author

@jeffski , thank you for the reminder. I had completely forgotten about this. I am updating my branch and working on the proposed changes right now.

@nilp0inter
Author

@ferdnyc , I've changed the code according to your comments (thank you very much for the detailed explanation, btw) and tested it manually using openshot-qt. Everything appears to be working.

Regarding the tests, I'd like to give it a try, and will do in the following days.

Please comment if you see anything wrong with the current implementation.

@github-actions

Merge conflicts have been detected on this PR, please resolve.

@github-actions github-actions bot added the conflicts A PR with unresolved merge conflicts label Mar 18, 2023