Showcase › Spine Pro Vtuber Working Prototype

Did you update your Spine file from the last time that I downloaded it?

No, I did not add any change to my Spine file.

I have tried several times and it appears to turn black depending on the timing. Here is the video:

I simply reloaded the browser and tried the same steps; sometimes it worked and sometimes it didn't. You can see the bug at the following times in the video:
0:07
1:07
1:19
2:12


Another possible cause is Spine runtime texture loading. If a texture has not finished loading by the time the Spine runtime renders, the texture is black by default.
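One common way to avoid that race, sketched below with a stand-in for the real asset manager (the names `MockAssetManager`, `renderIfReady`, and `finishOne` are hypothetical, not part of Spine Vtuber Prototype), is to skip drawing until loading reports complete:

```javascript
// Minimal stand-in for an asset manager, to show the pattern without the
// spine-ts runtime itself. Two textures are "pending" when created.
class MockAssetManager {
  constructor() { this.pending = 2; }
  isLoadingComplete() { return this.pending === 0; }
  finishOne() { this.pending -= 1; }
}

function renderIfReady(assetManager, drawFn) {
  // Skip drawing until every texture is resident; otherwise the runtime
  // has no texture bound for the attachment and the mesh renders black.
  if (!assetManager.isLoadingComplete()) return "skipped";
  return drawFn();
}
```

In a real render loop this check would run each frame (or once before starting the loop), so a slow texture download only delays the first draw instead of producing a black model.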

I believe this is what is happening in your situation after your further testing. The reason the texture loads incorrectly and inconsistently may be that you have the web developer console open. I know for a fact that the console can delay script execution. An open console lags inconsistently, which could explain why the texture loads correctly only sometimes.

When I first read your reply, I thought that explained it, but I was able to reproduce the problem without opening the developer tools. In most cases, just reloading the page and re-uploading the files solves the problem, so I'm still not sure of the cause. Maybe other people who encounter this problem will eventually have a clue.
Since there is a workaround and it is not a very serious bug, I think you can ignore it for the time being.

Update

1.0.9

  • Add premultiplied alpha setting to Single Value Properties. It takes the value "false" or "true"; each model can be rendered with either premultiplied alpha or straight alpha.
  • Expand the *.svp file to save the premultiplied alpha setting.
  • Add the ability to load a video for face tracking. You can find it in the Video Player Settings menu. The video player accepts *.mp4, *.webm, and *.ogg, as long as your web browser supports those formats and the corresponding video codecs.
  • Allow the required Spine animations to be placed in a folder. All the required animations need to be in the same folder; you cannot split the required animations across multiple folders.

https://silverstraw.itch.io/spine-vtuber-prototype
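For context on the new premultiplied alpha setting: the flag decides which source blend factor the renderer should use, since PMA textures already have their color channels scaled by alpha. A tiny sketch of that choice (the helper `blendFuncFor` is hypothetical, not part of the app):

```javascript
// With premultiplied alpha the texel RGB is already multiplied by alpha,
// so the source factor is ONE; straight alpha still needs SRC_ALPHA
// applied at blend time. Using the wrong pairing produces dark fringes
// or washed-out edges around the model.
function blendFuncFor(premultipliedAlpha) {
  return premultipliedAlpha
    ? { src: "ONE", dst: "ONE_MINUS_SRC_ALPHA" }
    : { src: "SRC_ALPHA", dst: "ONE_MINUS_SRC_ALPHA" };
}
```

This is why the setting is per model: it has to match how each model's texture atlas was exported.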

@SilverStraw I just tried the video player feature in 1.0.9 and it's GREAT!!! Thanks to the ability to stop the video midway through, it is very easy to find the points I need to fix, which is very helpful to improve my model.

@OatmealMu The mouth rig of your model is very cool! 😃

7 days later

I just tried the video player feature in 1.0.9 and it's GREAT!!! Thanks to the ability to stop the video midway through, it is very easy to find the points I need to fix, which is very helpful to improve my model.

Sorry for the late reply. I have been working on a troublesome experimental feature. I am glad that the video player is helpful to everyone. I hope you can review the next experimental feature, because it looks promising. Erika and Cydomin requested it.

Update

1.1.0

  • Override the spine.SkeletonData.findAnimation method to take a regular expression string parameter. This allows regular expression patterns to be used when finding animation names.
  • Add an experimental menu for unfinished features, allowing others to give feedback during development.
  • Add "Motion Capture Snapshot" to the experimental menu to test saving a motion capture pose as an animation in JSON. The JSON can then be re-imported as data into the Spine editor. This only works with a Spine JSON file imported into Spine Vtuber Prototype; a Spine skel binary will not work because Spine Vtuber Prototype needs to duplicate the Spine JSON. Bone and draw order keyframes are currently operational. This is a requested, unfinished feature.
  • Add a slider (range) to control the video player opacity in the video player settings menu. At full transparency, the video player can still be controlled through the right-mouse context menu.

https://silverstraw.itch.io/spine-vtuber-prototype
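The regex-matching idea behind the findAnimation override can be sketched against a plain array; the function below mirrors the described behavior but is a standalone illustration, not the app's actual override of `spine.SkeletonData.findAnimation`:

```javascript
// Return the first animation whose name matches the given regular
// expression pattern string, or null if none matches. The stock
// findAnimation returns the first exact-name match; this version
// accepts a pattern such as "^blink_" instead.
function findAnimation(animations, pattern) {
  const re = new RegExp(pattern);
  for (const anim of animations) {
    if (re.test(anim.name)) return anim;
  }
  return null;
}
```

A pattern like `"^mouth_"` would then pick up the first of a whole family of mouth animations without hard-coding each name.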

Add an experimental menu for unfinished features, allowing others to give feedback during development.
Add "Motion Capture Snapshot" to the experimental menu to test saving a motion capture pose as an animation in JSON. The JSON can then be re-imported as data into the Spine editor. This only works with a Spine JSON file imported into Spine Vtuber Prototype; a Spine skel binary will not work because Spine Vtuber Prototype needs to duplicate the Spine JSON. Bone and draw order keyframes are currently operational. This is a requested, unfinished feature.

I'm really excited about this new feature's potential! :heart:
As far as I've tried, I seem to obtain only one frame so far; is this the intended behavior? The pose on that frame did indeed match the pose of the model at the time the download was performed, so if the current behavior is as intended, then it is working correctly.

Add a slider (range) to control the video player opacity in the video player settings menu. At full transparency, the video player can be controlled by the right-mouse context menu.

This is really helpful! It would be useful for taking screenshots or screen recording.

This is really helpful! It would be useful for taking screenshots or screen recording.

Yes, for those who do not want to show their faces. 🙂

As far as I've tried, I seem to obtain only one frame so far; is this the intended behavior? The pose on that frame did indeed match the pose of the model at the time the download was performed, so if the current behavior is as intended, then it is working correctly.

Currently there is only one frame. There were several changes to the Spine JSON from version 3 to version 4, and I have not yet added functions for all the keyframe types. I am still testing a single keyframe. Once that is done, I will move on to saving multiple keyframes. 😃

Currently there is only one frame. There were several changes to the Spine JSON from version 3 to version 4, and I have not yet added functions for all the keyframe types. I am still testing a single keyframe. Once that is done, I will move on to saving multiple keyframes. 😃

Great to hear that!! Being able to save multiple frames of captured motion would be a very beneficial update, so that people who want to use Spine skeletons to make a music video could use this tool for lip sync and face tracking. (I was just recently asked about lip sync by a Japanese user, so I believe there is real demand for it.)
I'm really looking forward to the next update very much :bananatime:

6 days later

Update 1.1.1

• Add "Start Record" button to the Experimental menu. This button starts motion capturing the model(s)' poses. The user will need to enter the amount of delay between each capture (in milliseconds).
• Add a slowly flashing "Recording" label to the Experimental menu. This signals when the application is motion capturing.
• Add "Stop Record" button to the Experimental menu. This button stops motion capturing.
• Add "Play Keyframes" button to the Experimental menu. This button plays back the motion capture without any in-betweening.
• Add "Save Motion Capture" button to the Experimental menu. This button saves the current motion capture. The user is asked to enter a name for the motion capture animation and the amount of delay between each keyframe (in milliseconds). The user is also asked to save a JSON file that can be re-imported back into the Spine editor. The user can save multiple motion capture animations in one JSON file by choosing "Cancel" in the save dialog window after each motion capture session, then saving the JSON file at the end when ready to re-import into the Spine editor.

https://silverstraw.itch.io/spine-vtuber-prototype
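The Start/Stop Record flow described above amounts to snapshotting each bone's transform on a fixed interval and stamping keyframe times from that interval. A minimal sketch under that assumption (the `MotionRecorder` class is hypothetical; the real app drives `capture` from a timer rather than manual calls):

```javascript
// Records one keyframe per capture call: the keyframe time advances by
// the chosen delay, and each bone's local transform is snapshotted.
class MotionRecorder {
  constructor(delayMs) {
    this.delayMs = delayMs;   // delay between captures, in milliseconds
    this.frames = [];          // accumulated keyframes
  }
  capture(bones) {
    // Spine JSON keyframe times are in seconds, hence the /1000.
    const time = (this.frames.length * this.delayMs) / 1000;
    this.frames.push({
      time,
      bones: bones.map(b => ({ name: b.name, x: b.x, y: b.y, rotation: b.rotation })),
    });
  }
}
```

In the app, "Start Record" would begin calling `capture` every `delayMs` (e.g. via `setInterval`) and "Stop Record" would stop the timer, leaving `frames` ready to be serialized into the Spine JSON animation format.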

1.1.1 is Super COOL!! 😃 I obtained the following animation in about 10 minutes using the motion capture of 1.1.1:

Supplement for people who have not yet tried this feature: the animation in the video above is not the state immediately after importing the JSON obtained with the motion capture feature; I added a little editing (removing unnecessary keyframes, matching the first and last keyframes, etc.).

I set the amount of time between each keyframe, which can be defined when starting a recording, to 50 milliseconds. I also tried 250 (the default) and 100, but 50 seemed about right for the smoothness I was looking for.
@SilverStraw I was afraid that changing the time setting would cause some kind of error or freeze the recording, but it didn't at all, and I got this result without any errors. I also tried recording for a longer time and had no problems even then, and I was amazed at how fast the captured data's JSON file was saved. This is really fantastic! :heart:

By the way, I have a question. When I record with 50 milliseconds between each keyframe, the captured animation becomes smooth, but the motion feels slower than when it was captured. Does the interval affect the FPS of the result?

By the way, I have a question. When I record with 50 milliseconds between each keyframe, the captured animation becomes smooth, but the motion feels slower than when it was captured. Does the interval affect the FPS of the result?

I did not understand your question.

What I wanted to ask is whether a shorter time between keyframes means a higher FPS in the saved animation. For example, at 250 milliseconds the animation would be recorded at 4 FPS, but at 50 milliseconds it would be recorded at 20 FPS. I would like to save an animation that plays back at the same speed as when it was captured, but the actual saved animation seems to move more slowly, and I would like to know how to play it back at the correct speed.

Do you mean the playback of the animation in Spine Vtuber Prototype or in the Spine editor?
The "Start Record" and "Save Motion Capture" buttons each ask for a time delay. You do not necessarily need to input the same value for both buttons. When using the "Save Motion Capture" button, you can enter a shorter delay than the one you used while motion capturing. I hope this helps.

I meant in the Spine editor, sorry for being unclear!

The "Start Record" and "Save Motion Capture" buttons each ask for a time delay. You do not necessarily need to input the same value for both buttons. When using the "Save Motion Capture" button, you can enter a shorter delay than the one you used while motion capturing. I hope this helps.

I see, so I can make adjustments that way. Thank you for your response!
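One way to see the relationship in the exchange above: capturing every `captureMs` produces real-time motion, while the keyframe spacing written on save (`saveMs`) decides how fast the Spine editor plays it back, so the speed change is just their ratio. The helper below is a hypothetical illustration, not part of the app:

```javascript
// A factor of 1 means the saved animation plays at the captured speed;
// >1 means it plays faster. Saving with a shorter delay than the capture
// delay therefore speeds playback up, and vice versa.
function playbackSpeedFactor(captureMs, saveMs) {
  return captureMs / saveMs;
}
```

So if motion was captured every 50 ms but saved with 100 ms keyframe spacing, the result plays at half speed, which matches the "slower than captured" impression; entering 50 for both keeps it real time.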

This may be an unpopular idea/opinion, but instead of face tracking, could the character follow the mouse, or do both in different modes? And maybe when you press/hold a certain button the expression changes (I imagine having animation types in Spine like idle, happy, sad, angry, etc.), or even the clothing changes (which in Spine would be skins). Anyway, just a thought I had.


Here's a program with that example (except, of course, it uses still images and has nothing to do with Spine animations):
https://dreamtoaster.itch.io/honk

13 days later

This may be an unpopular idea/opinion, but instead of face tracking, could the character follow the mouse, or do both in different modes?

Mouse movements cannot be tracked when the web browser is out of focus.

And maybe when you press/hold a certain button the expression changes (I imagine having animation types in Spine like idle, happy, sad, angry, etc.), or even the clothing changes (which in Spine would be skins).

There are no built-in animation types for facial expressions in the Spine editor. There are plans to add more animation tracks for customizable overriding of lower tracks.
There is support for skins, but not for runtime mix-and-match skins.
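The track-override plan mentioned above can be sketched with a toy track list; this mirrors how Spine's AnimationState layers tracks (the highest occupied track wins for the properties it keys), but `TinyTrackState` is an illustrative stand-in, not the runtime's actual class:

```javascript
// Toy version of layered animation tracks: track 0 might hold "idle"
// while a higher track temporarily overrides it with an expression.
class TinyTrackState {
  constructor() { this.tracks = []; }
  setAnimation(trackIndex, name) { this.tracks[trackIndex] = name; }
  clearTrack(trackIndex) { this.tracks[trackIndex] = undefined; }
  current() {
    // Scan from the highest track down; the first occupied one wins.
    for (let i = this.tracks.length - 1; i >= 0; i--) {
      if (this.tracks[i] !== undefined) return this.tracks[i];
    }
    return null;
  }
}
```

A press/hold hotkey for expressions would then just call `setAnimation(1, "happy")` on key down and `clearTrack(1)` on key up, letting the idle animation on track 0 resume.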

Here's a program with that example (except, of course, it uses still images and has nothing to do with Spine animations):
https://dreamtoaster.itch.io/honk

Have you purchased that program?


Work in Progress: Integrating face expression AI into the project. It is going to be another experimental feature.

Work in Progress: Integrating face expression AI into the project. It is going to be another experimental feature.

Sounds very interesting! I’m rooting for you 🙂

Update 1.1.2

  • Allow a model to be loaded into its default pose even if the model lacks the required animations.
  • Add face expression AI testing to the Experimental menu. The "Face Expression AI Active" checkbox starts and stops the facial expression recognition. The camera has to be active for the AI. This experiment outputs the recognized facial expression state and lists all the possible facial expressions with their respective confidence levels.
  • Add "Change Facial Expression Threshold" button to the Experimental menu to adjust the confidence threshold for each possible facial expression. The neutral expression will be ignored in future releases.

https://silverstraw.itch.io/spine-vtuber-prototype

The face expression AI is tricky to use. Some facial expressions are easier to recognize than others.
By default, the AI chooses the facial expression with the highest confidence level.
I added a way for the user to adjust the confidence threshold for each possible facial expression, as another criterion for determining the face expression state.
This AI experiment does not affect any rendered model, so you can test it without loading any Spine files. The camera has to be active for the AI to get image data.
Determining the facial expression that you want is an important step before the project starts messing with your vtuber models.
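The threshold-plus-argmax selection described above could look roughly like this; `pickExpression` is a hypothetical helper written from the description, not the app's actual code:

```javascript
// Choose the highest-confidence expression among those that clear their
// per-expression threshold. Expressions with no configured threshold
// default to 0 (always eligible). "neutral" can be skipped, matching the
// plan to ignore it in future releases.
function pickExpression(confidences, thresholds, ignoreNeutral = false) {
  let best = null;
  for (const [name, conf] of Object.entries(confidences)) {
    if (ignoreNeutral && name === "neutral") continue;
    if (conf < (thresholds[name] ?? 0)) continue;
    if (best === null || conf > confidences[best]) best = name;
  }
  return best; // null when nothing clears its threshold
}
```

Raising the threshold for an expression the AI over-reports (and lowering it for one that is hard to trigger) is then a per-user calibration step, which fits the observation that some expressions are recognized more easily than others.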