• Showcase
  • Trying integration of Spine + Sprite Lamp

Another video with just a few small improvements (and pants!), no big changes, this time in HD:
http://www.youtube.com/watch?v=vKby1IBrfCM

About mobile:

I haven't tried it on mobile, but I think what we are seeing in this demo, even with more characters and scenery elements, will work without problems on any "60fps Epic Citadel" device. The main performance impact will be the number of lights you use.

Moving forward:

I need to implement material and lighting systems for my 2D engine in order to do something more than this demo. I haven't implemented the latest Spine features like FFD yet, but I hope to at some point; I'm using my own C++ runtime. FFD is fully compatible with normal mapping, so no issue there.

I think the way to go for desktop and modern devices (OpenGL ES 3.0) is deferred lighting (with MRT). That way you avoid the performance impact of having a lot of sprites and lights, because all the transformed normal information ends up in a single full-screen buffer. It is also how you can achieve the self-shadow feature described in the Sprite Lamp blog: since Spine uses separate sprites, you need to render those sprites' normals to a single texture in order to share the information required for self-shadow casting.
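Roughly, the normal pass of that deferred setup could look something like this in GLSL ES 3.0 (only a sketch; the names and the per-vertex rotation input are placeholders, not my engine's actual code):

#version 300 es
precision mediump float;

// Sketch of the normal pass: every sprite writes its rotated normal into a
// second render target, so lighting (and the self-shadow pass) can later be
// resolved once against a single full-screen normal buffer.

in vec2 v_uv;          // sprite UVs
in float v_rotation;   // sprite/attachment rotation, passed per vertex (assumed)

uniform sampler2D u_diffuse;
uniform sampler2D u_normalMap;

layout(location = 0) out vec4 o_albedo;   // MRT target 0: color
layout(location = 1) out vec4 o_normal;   // MRT target 1: shared normal buffer

void main() {
    vec4 albedo = texture(u_diffuse, v_uv);
    vec3 n = texture(u_normalMap, v_uv).xyz * 2.0 - 1.0;

    // Rotate the decoded normal by the sprite rotation so every sprite lands
    // in the same space inside the shared normal buffer.
    float c = cos(v_rotation);
    float s = sin(v_rotation);
    n.xy = mat2(c, s, -s, c) * n.xy;

    o_albedo = albedo;
    o_normal = vec4(normalize(n) * 0.5 + 0.5, albedo.a);
}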

5 days later

A heads-up for Unity users: I have been doing tests with Unity3D (the previous tests were done with my in-house C++ engine), and I found that the current Spine runtime for Unity3D submits all the skeleton slots as a single mesh. This means that all slots share the same Mesh Renderer component, and therefore all receive the same _World2Object matrix, which only contains the skeleton transformation, not the slot transformation itself. As a result, the normals from the normal map cannot be transformed into slot space, which makes it impossible to achieve the correct illumination effect.

Is anyone working on a different runtime, or planning to create one where each slot uses its own Mesh Renderer component?

You probably want to use a shader to rotate the normals for each texture region drawn.

A runtime that uses a gameobject per attachment could be useful. I just worry about performance.

But normal maps work fine with typical 3D meshes, right? Apart from the vertex normals potentially being flipped with 2D meshes, why wouldn't it work for Spine's meshes? There must be a way to push the necessary data to the shader without resorting to 1 GameObject: 1 Attachment.

Similar topic, but in the case of Unity3D it's more a runtime problem than a shader problem. The algorithm for computing the correct normal rotation is not an issue in the Unity3D shader I'm using; the problem is that the shader receives the wrong information from the runtime, because only the skeleton transformation is passed as a uniform to the shader for all slots.

The GameObject-per-slot suggestion is because I want to "follow the rules" of Unity3D components; that is the standard Unity3D way of doing these things. We can of course be more imaginative and pass this information the same way we do for a regular 3D skeleton with bones and vertices, in this case by indexing the texture region transformations, assigning those indexes to each vertex, and exposing the transformations and indexes to the shader.
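Something like this, with a uniform array indexed per vertex (only a sketch; names, array size and packing are made up):

#version 300 es

// Sketch of the "index the region transforms" idea: each vertex carries the
// index of its texture region, and the per-region rotations are uploaded as a
// uniform array, much like bone matrices in a skinned 3D mesh.

layout(location = 0) in vec3 a_position;
layout(location = 1) in vec2 a_uv;
layout(location = 2) in float a_regionIndex;   // which region this vertex belongs to

uniform mat4 u_mvp;
uniform float u_regionRotation[64];            // rotation per region, in radians

out vec2 v_uv;
out float v_rotation;

void main() {
    v_uv = a_uv;
    v_rotation = u_regionRotation[int(a_regionIndex)];
    gl_Position = u_mvp * vec4(a_position, 1.0);
}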

But to be honest, in some cases the skeleton we create in Spine can be used as a template, where each texture region is only a placeholder for adding our own attachments. So having each slot or texture region as a separate GameObject is desirable to meet some requirements. I know this is a topic for another thread, but having a component for each slot gives more versatility.

Aye, though the only reason the information is "wrong" is because the shader expects the gameobject transform to describe the transform applied to the whole mesh.

A GameObject-based Spine runtime has pros and cons. It may be easier to integrate with other Unity features, such as your shader, or setting an object as a child of an attachment so it participates in the skeleton draw order and bone transform, etc. The downsides are that it may be awkward, if not complex, to have GameObjects generated under the skeleton for each attachment. I also worry about performance. Dynamic batching has limitations on the number of vertex attributes, transform scale (this isn't clear), and multi-pass shaders. I haven't personally compared the performance of a single mesh versus many GameObjects, but someone else on the forums reported it to be quite a bit slower (FWIW). Still, if a GameObject-based runtime is useful enough, maybe the downsides can be tolerated. I do plan to experiment with it.

I wonder about using it as a template. I think you would need to make your gameobject the child of a generated attachment gameobject, since the skeleton is going to be tracking and manipulating the attachment gameobject.

Thank you for looking further into this for us Unity users, Gallo.
To be clear, when we're talking about slots here you mean the Sprite's different parts or attachments, right? Just want to understand the problem well and see if we can experiment with the runtime too.

Nate, when a shader causes overlapping of the parts, like in the following example, what's usually the problem?

[screenshot showing the overlapping parts]

Slots aren't actually rendered, so I assume he meant region attachments.

It looks like you aren't using premultiplied alpha or aren't blending the premultiplied alpha correctly.

Yes, when I say slot I mean region attachment, hehe.

For example: as you can see in this video everything seems to work, but if I switch to an animation where the bones aren't near their default transformation and move the light, the problem appears (see the head and legs). The region attachment pixels are shaded as if the slots weren't rotated by the bone, being affected only by the skeleton transformation.

http://www.youtube.com/watch?v=lmoVfzz5P_E

Yup. 🙁 I think the quickest solution is to pass the rotation to the shader for each attachment, then rotate the light. I believe that is what this code is doing, around line 213:
http://pastebin.com/8U6vWr0P
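Roughly, the idea is something like this (a GLSL sketch of the approach, not the pastebin code; names are placeholders):

#version 300 es
precision mediump float;

// Sketch of the "rotate the light" idea: the light direction is rotated by
// the inverse of the per-attachment rotation before the usual N.L term, so
// the tangent-space normal map can be sampled as-is.

in vec2 v_uv;
in float v_rotation;                 // per-attachment rotation (assumed input)

uniform sampler2D u_normalMap;
uniform vec3 u_lightDir;             // light direction in skeleton/world space

out vec4 o_color;

void main() {
    vec3 n = texture(u_normalMap, v_uv).xyz * 2.0 - 1.0;

    // Rotate the light by -rotation, i.e. the inverse of the attachment rotation.
    float c = cos(-v_rotation);
    float s = sin(-v_rotation);
    vec3 l = vec3(mat2(c, s, -s, c) * u_lightDir.xy, u_lightDir.z);

    float ndotl = max(dot(normalize(n), normalize(l)), 0.0);
    o_color = vec4(vec3(ndotl), 1.0);
}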

Well, I finally found a way to pass the proper information to each vertex and get the desired lighting.

http://www.youtube.com/watch?v=PyfMK6QSaVA

I'm using the "tangent" vector from the Mesh object to pass the rotation and scale of each vertex to the shader. Then in the shader I build a rotation matrix from those values and transform the normal with it.
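In GLSL terms (the Unity shader is Cg, but the idea is the same) the unpacking looks roughly like this; a simplified sketch, not my exact shader, and the packing shown is just one possible choice:

#version 300 es
precision mediump float;

// The per-vertex rotation (and scale/flip) is smuggled to the shader in the
// mesh tangent, assumed here as tangent = (cos, sin, scaleX, scaleY).

in vec2 v_uv;
in vec4 v_tangent;                   // (cos, sin, scaleX, scaleY), assumed packing

uniform sampler2D u_normalMap;
uniform vec3 u_lightDir;

out vec4 o_color;

void main() {
    vec3 n = texture(u_normalMap, v_uv).xyz * 2.0 - 1.0;

    // Rebuild the rotation matrix from the packed tangent and apply it to the
    // normal's XY. The scale part is only used for its sign here, to handle
    // flipped attachments; true non-uniform scale would need the inverse.
    mat2 rot = mat2(v_tangent.x, v_tangent.y, -v_tangent.y, v_tangent.x);
    n.xy = rot * (n.xy * sign(v_tangent.zw));

    float ndotl = max(dot(normalize(n), normalize(u_lightDir)), 0.0);
    o_color = vec4(vec3(ndotl), 1.0);
}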

Still pending: making it work with multiple lights; something seems wrong in the material rather than the shader. Directional lights require a modification too.

Nice. Good job getting it working right!

Woo! That looks great!

7 days later