Welcome, DanielWolf! 🙂

You're right that generating an entire skeleton would make it difficult to get the animation into an existing skeleton. It could be done using Import Project, which allows importing an animation from another project file: Rhubarb would write a skeleton JSON file, you'd use Import Data to get it into a project file, then you could use Import Project to merge in the animation. Note that this requires both skeletons to have the same objects that were keyed (slots, attachments, events, etc.).

Your proposal would work fine and is simpler than the above juggling. Users would export their project to JSON, add animations to it with Rhubarb, then either use the data as is or import it back into a Spine project for further editing. I don't think there is a better approach.


Thanks for your detailed explanation. That's exactly what I was assuming, so I'll implement it that way.

I'm not sure how soon I will find the time to implement the exporter. As soon as there is any news, I'll update this topic.

a month later

So, I tried to generate an animation programmatically from the Papagayo output, as can be seen above, but no success: I get no error messages, but no attachments seem to get set on the slot "Mouth". Any suggestions? Here is my code:

    public TextAsset speechTxt;
    private int speechTxtFrameRate = 24;

    void Talk() {
        // Each line of the exported text file is "<frameNumber> <mouthShapeName>";
        // the first line is a header and the last line is empty.
        var lines = speechTxt.text.Split('\n');

        // One key per data line.
        AttachmentTimeline timeLine = new AttachmentTimeline(lines.Length - 2);
        timeLine.slotIndex = skeletonAnimation.skeleton.data.FindSlotIndex("Mouth");

        float duration = 0;
        for (int frameIndex = 1; frameIndex < lines.Length - 1; frameIndex++) {
            var frameData = lines[frameIndex].Split(' ');
            float time = float.Parse(frameData[0]) / speechTxtFrameRate;
            timeLine.SetFrame(frameIndex - 1, time, "Mouths/" + frameData[1]);
            duration = time;
        }

        ExposedList<Timeline> timelines = new ExposedList<Timeline>();
        timelines.Add(timeLine);

        // The animation lasts until its final key.
        Spine.Animation anim = new Spine.Animation("talk", timelines, duration);
        skeletonAnimation.skeleton.data.animations.Add(anim);
        skeletonAnimation.state.SetAnimation(1, "talk", false);
    }

It looks OK. You'll have to debug it further. Does the AttachmentTimeline try to set the attachment when it should? If so, when it sets the attachment, is it able to find the attachment with the Mouths/xxx name?
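For example, a quick check along these lines (just a sketch using the spine-csharp Skeleton.GetAttachment; adjust the names to your setup) would show whether each name from the text file actually resolves to an attachment:

    // Log every mouth-shape name from the text file that does not
    // resolve to an attachment on the "Mouth" slot.
    var skeleton = skeletonAnimation.skeleton;
    int slotIndex = skeleton.data.FindSlotIndex("Mouth");
    foreach (var line in speechTxt.text.Split('\n')) {
        var frameData = line.Split(' ');
        if (frameData.Length < 2) continue; // skip header/empty lines
        string name = "Mouths/" + frameData[1];
        if (skeleton.GetAttachment(slotIndex, name) == null)
            Debug.Log("Attachment not found: " + name);
    }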

Thank you for looking!
I have been tearing my hair out, but I just figured out that it was the line endings in my text file that were the problem :bang: ! When I converted them, everything worked :rofl: !
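For anyone hitting the same issue: with Windows-style "\r\n" line endings, splitting on '\n' leaves a trailing '\r' on each attachment name, so the lookup silently fails. Normalizing the text before splitting, for example like this, avoids the problem:

    // Normalize Windows ("\r\n") and old Mac ("\r") line endings first:
    var lines = speechTxt.text.Replace("\r\n", "\n").Replace("\r", "\n").Split('\n');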

5 months later
DanielWolf wrote

Thanks for your detailed explanation. That's exactly what I was assuming, so I'll implement it that way.

I'm not sure how soon I will find the time to implement the exporter. As soon as there is any news, I'll update this topic.

Hi, do you still plan to implement the exporter?

do you still plan to implement the exporter?

Absolutely. I've been postponing this feature for some time, mainly because I heard that Esoteric Software was planning to add audio support to Spine. I felt that it made sense to wait for that, since it makes things easier for me.

Now that Spine 3.7 beta is out, I'll soon start implementing Spine support in Rhubarb Lip Sync.

DanielWolf wrote

I heard that Esoteric Software was planning to add audio support to Spine.

And now we have it! 😃
Thanks for doing this, I myself would love to use it in some projects!

Erikari wrote

I myself would love to use it in some projects!

That's great to hear! 🙂 And it definitely makes more sense now that Spine has audio support.

By the way: Integrating third-party tools such as Rhubarb with Spine would be much easier if Spine had a plugin interface. I recently integrated Rhubarb with Adobe After Effects, and the UX is much better than what I'll be able to provide for Spine. Is there a chance to see a plugin/scripting system for Spine any time soon?

4 days later

Rhubarb looks great! I would love to see integration of this tool in one way or another.

23 days later
16 days later

I've been busy with other stuff, but now I'm ready to add Spine support to Rhubarb Lip Sync! 🙂

Now that Spine has audio support, I'm thinking of the following rough workflow:

  1. The user creates a Spine project with a skeleton. They give the skeleton a slot for the mouth. This slot contains multiple image attachments for the various mouth shapes, named using a fixed schema.
  2. The user has one or more audio files to lip-sync. For each audio file, they create an event.
  3. This step is optional: Rhubarb Lip Sync can create good mouth animation from audio alone, but the results are even better if Rhubarb is told the dialog text for each sound file. If the user wants to, they may set the event's default string value to the dialog text.
  4. The user exports the skeleton in JSON format.
  5. The user starts "Rhubarb Lip Sync for Spine", which will be a cross-platform GUI application. There, they open the exported JSON file.
  6. The user now has the following controls:
     - a dropdown to select the slot that represents the mouth
     - a list of all events with associated audio files, each with a checkbox
  7. At the click of a button, automatic mouth animation is performed for all checked events:
     - For each checked event, a new animation is created that's named after the event, with an added prefix like "say_". So if the user created the event 'hi_there' based on the file 'hi_there.wav', there will now be the animation 'say_hi_there'.
     - Each generated animation contains the matching audio event at frame 0, plus the actual mouth animation.
  8. The original JSON file is overwritten to contain the new animations (see the JSON sketch below).
  9. The user imports the JSON file back into Spine to get the new animations.
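To make the round trip concrete, here is a rough sketch of what the relevant parts of the written JSON might look like, as far as I can tell from the Spine JSON export format. The slot name "mouth", the event "hi_there", and the "mouth_a"/"mouth_b" attachment names are just placeholder examples:

    {
      "events": {
        "hi_there": { "audio": "hi_there.wav", "string": "Hi there!" }
      },
      "animations": {
        "say_hi_there": {
          "events": [
            { "time": 0, "name": "hi_there" }
          ],
          "slots": {
            "mouth": {
              "attachment": [
                { "time": 0, "name": "mouth_a" },
                { "time": 0.12, "name": "mouth_b" },
                { "time": 0.25, "name": "mouth_a" }
              ]
            }
          }
        }
      }
    }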

I don't expect any technical problems, but I'm concerned about the user experience. My experience with Spine is very limited, so I'd like to hear from some power users:

Does this workflow make sense to you?
Are there any aspects that may be inconvenient or unnecessarily complicated?

Any feedback is welcome!

Sounds like a good plan!

a month later

Quick update: I'm working on the lip sync tool right now. I don't have much spare time, so things are going slow, but I should definitely have something to show within the next few weeks. 🙂

21 days later

Rhubarb Lip Sync for Spine is making progress! I just performed the first successful round-trip, importing a Spine file into Rhubarb, animating it, then importing it back into Spine. :happy:

There's still a lot of work to be done though.

[Image removed due to the lack of HTTPS support.]

Super cool! :happy:

I'm so looking forward to this! 😃 Happy to hear the news!

BTW, we added a new feature you might find interesting, discussed in this thread:
"morph-target" track animation mix mode
It is a new kind of AnimationState mixing (additive) that enables mixing multiple mesh deforms. This could be used for facial expressions, e.g. mixing 40% angry with 20% surprised and 35% happy. Here's a GIF mixing a breathe animation with up/down and left/right animations:

[GIF removed due to the lack of HTTPS support.]


Notice he looks both left and down, and the alpha can be adjusted for each animation.
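For reference, driving that mix from code might look roughly like this with the 3.7 spine-unity API (a sketch; the animation names and track numbers are made up):

    // Each expression animation plays on its own track and is blended
    // additively on top of the base track; TrackEntry.Alpha controls
    // how strongly each one contributes.
    var state = skeletonAnimation.AnimationState;
    state.SetAnimation(0, "breathe", true);

    var angry = state.SetAnimation(1, "angry", true);
    angry.MixBlend = MixBlend.Add;
    angry.Alpha = 0.4f;

    var surprised = state.SetAnimation(2, "surprised", true);
    surprised.MixBlend = MixBlend.Add;
    surprised.Alpha = 0.2f;

    var happy = state.SetAnimation(3, "happy", true);
    happy.MixBlend = MixBlend.Add;
    happy.Alpha = 0.35f;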

a month later