All Posts in “javascript”

DDD 2018

Back in 2013 I worked on a project called Artism, which mixed dance with technology. At some point during the live show, the audience would see a dancer moving on stage and, behind him, a projection of a 3D character performing the exact same moves. The character was morphing between a man and an ape.

At the time I was working on visuals for another part of the show, but I had access to the motion capture file and kept a copy to do something with it one day.

The day arrived when I was invited to submit an experiment for DEVX, part of Digital Design Days 2018 in Milan.

Concept
The theme was ‘the monolith’ and the first thing that came to my mind was this scene from 2001: A Space Odyssey with all the apes jumping around a monolith.

Monolith, ape, man, dance. There was something there.

Visuals
The original mocap was an FBX file and contained only bone data; there was no skin, so it couldn’t be used directly. It took me a while to figure out the best way to work with the data. A few months earlier I had been looking into ways of optimising Three.js’ own JSON format and even published a small npm package for it, but I ended up not using it. I used glTF instead.

When I started I didn’t know how the model was going to be rendered. This is what I like about experiments: they don’t follow a set path; they just flow in the direction that suits them best as they go. In this case the coloured lines and ribbons in the final result appeared after several iterations and a lot of other unsuccessful ideas.

Tech
On the technical side, I think there are two interesting things to mention: one is how to find the positions of vertices influenced by bones, and the other is how to sort the vertices so that the line segments look pretty.

Transformed skin vertices
The vertices on a skinned mesh are transformed in real time, either because of morph targets or because of bones. In Three.js the positions are updated in the vertex shader, but in order to make elements (i.e. the ribbons) follow some given vertices, I needed to know their positions in JavaScript. I learned how to do that for morph targets in my Billie Deer project. I also figured out how to find the positions of the bones themselves by looking into the code of SkeletonHelper.
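For the bone positions, what SkeletonHelper does boils down to reading each bone’s world matrix. A minimal sketch, assuming a skinned mesh called `skinnedMesh` whose world matrices are up to date:

```javascript
import * as THREE from 'three';

// World-space position of every bone, read straight from its matrixWorld
// (call scene.updateMatrixWorld() first if needed).
const bonePositions = skinnedMesh.skeleton.bones.map( ( bone ) =>
  new THREE.Vector3().setFromMatrixPosition( bone.matrixWorld )
);
```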

However, it was a different story for bones and skin weights; there are quite a few matrix operations involved. After hitting my head against the wall for a few days, I was saved by the right search term on Google, which led me to this: https://stackoverflow.com/questions/31620194/how-to-calculate-transformed-skin-vertices. Thank you, makc3d, for unblocking the rest of this experiment.
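The calculation from that answer is essentially the skinning blend from the vertex shader, redone in JavaScript. A hedged sketch of it (variable and function names are mine, assuming a BufferGeometry-based SkinnedMesh; newer three.js releases also expose a built-in helper for this on SkinnedMesh):

```javascript
import * as THREE from 'three';

// Redo the skinning blend on the CPU to get one vertex's world-space position.
// `mesh` is a THREE.SkinnedMesh whose skeleton has been updated for the current frame.
function getSkinnedVertexWorldPosition( mesh, vertexIndex ) {
  const { position, skinIndex, skinWeight } = mesh.geometry.attributes;

  const vertex = new THREE.Vector3().fromBufferAttribute( position, vertexIndex );
  const indices = new THREE.Vector4().fromBufferAttribute( skinIndex, vertexIndex );
  const weights = new THREE.Vector4().fromBufferAttribute( skinWeight, vertexIndex );

  // Move the rest-pose vertex into bind space.
  const skinVertex = vertex.clone().applyMatrix4( mesh.bindMatrix );

  // Blend up to four bone matrices, weighted by the skin weights.
  const skinned = new THREE.Vector3();
  const temp = new THREE.Vector3();
  const boneMatrix = new THREE.Matrix4();

  for ( let i = 0; i < 4; i ++ ) {
    const weight = weights.getComponent( i );
    if ( weight === 0 ) continue;
    boneMatrix.fromArray( mesh.skeleton.boneMatrices, indices.getComponent( i ) * 16 );
    temp.copy( skinVertex ).applyMatrix4( boneMatrix ).multiplyScalar( weight );
    skinned.add( temp );
  }

  // Back to mesh space, then into world space.
  return skinned.applyMatrix4( mesh.bindMatrixInverse ).applyMatrix4( mesh.matrixWorld );
}
```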

Sorted body parts
Once a mesh is defined it is easy to change from drawing triangles to drawing lines or line segments. The challenge is to make the lines look good. Segments are drawn for pairs of vertices, so if in our model vertex 0 belongs to the right foot and vertex 1 belongs to the head, a straight line would be drawn across the model. If all the vertices are connected as such, we end up with a convex shape saturated with lines and the body becomes indistinguishable.

One way to improve that is to sort the vertices by their distance from each other in the first frame of the animation. It helps, but it is not enough. The best approach is to create a correspondence between a vertex and the body part it belongs to. Luckily, we can read body parts from the skeleton and we can check which vertex is influenced by which bone using skinWeight and skinIndex. For a given vertex on a skinned mesh, get the index associated with the strongest weight, then get the bone name for that index, and the result is something like: vertex 2714 is part of the pelvis.
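A sketch of that lookup, assuming a BufferGeometry-based skinned mesh (the function name and the exact bone names are illustrative):

```javascript
// Map each vertex to the bone that influences it the most, then group vertex
// indices by bone name so lines are only drawn within a body part.
function groupVerticesByBone( mesh ) {
  const { position, skinIndex, skinWeight } = mesh.geometry.attributes;
  const groups = {}; // e.g. { Pelvis: [ 2714, ... ], Head: [ ... ] }

  for ( let v = 0; v < position.count; v ++ ) {
    const weights = [ skinWeight.getX( v ), skinWeight.getY( v ), skinWeight.getZ( v ), skinWeight.getW( v ) ];
    const indices = [ skinIndex.getX( v ), skinIndex.getY( v ), skinIndex.getZ( v ), skinIndex.getW( v ) ];
    const strongest = weights.indexOf( Math.max( ...weights ) );
    const boneName = mesh.skeleton.bones[ indices[ strongest ] ].name;

    ( groups[ boneName ] = groups[ boneName ] || [] ).push( v );
  }

  return groups;
}
```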

Now vertices can be grouped by body parts and lines can be drawn inside those groups. If the technique were applied on its own, this is how it would look:

Credits
Choreography: Eric Nyira
Performance: David Gellura
Motion capture: Audiomotion
Music: Henri Texier
Sound FX: modified from sources on Freesound

And once again, I’ve added the Konami code.
Go ahead and try it. Open the experiment and press:
up up down down left right left right b a

Where is the monolith?
It’s implicit =)

Check the experiment here: http://devx.ddd.it/en/experiment/1/

Circular Images

circular images

It was a sunny day in London. I was sitting under a tree and had this idea: what if I could scan an image and draw the bright areas with circular lines? Bullshit. I stole it. This guy did it first.

I absolutely loved the visuals and had to do it myself. He tagged it as #processing but I didn’t find a sketch or a video so I could only guess how it worked. Once I got the basics working I had a few ideas for variations and threw some sliders in. It’s not completely new, but I like to think my small additions are valid.

There is more to explore. Maybe make the circles pulse with sound. Maybe add 3D. Maybe write a shader and run it over a video. These could all be really cool. If someone wants to try please go ahead and let me know what you did. I might try them too. But for now I just want to release this as is. A quick experiment.

I imagine this could be a nice artwork for an album cover. If you agree and happen to know just the band/artist, get in touch and I’ll be happy to work on a version at a good print-quality size.

Thank you @williamapan for the amazing ‘Shout’ photo.

Check the experiment here.

Teen Spirit

teen-spirit-01

Even before I finished my previous experiment I already knew the next one was going to be about sound. I wanted to do music visualization in the browser.

And one more time, the result of the experiment is quite different from the initial idea. But I quite like that; it is one of the nice things about experimentation: the direction can change any time something interesting appears in the process.

I started by playing around with the Web Audio API and looking for references. I found this cool project called Decorated Playlists, a website ‘dedicated to the close relationship between music & design.’ One of the playlists – Run For Cover – had some nice visuals with bold lines coming towards the viewer with a strong perspective. I imagined how those lines could react to music and replicated them in code. In these initial tests I was using a song from that playlist called Espiritu Adolescente, by Mandrágora Tango Orchestra – which is a cool tango version of Nirvana’s Smells Like Teen Spirit. It was looking good, but nothing special. I tried a few variations here and there and eventually dropped the idea.
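For context, the plumbing behind bars like these is the Web Audio AnalyserNode; a minimal sketch of the kind of setup involved (not the project’s actual code):

```javascript
// Feed an <audio> element through an AnalyserNode and read per-frame
// frequency data that can drive the bars.
const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
analyser.fftSize = 256;

const source = audioContext.createMediaElementSource( document.querySelector( 'audio' ) );
source.connect( analyser );
analyser.connect( audioContext.destination );

const frequencyData = new Uint8Array( analyser.frequencyBinCount );

function update() {
  analyser.getByteFrequencyData( frequencyData ); // values 0..255, one per frequency bin
  // ...map each bin to a bar, e.g. frequencyData[ i ] / 255
  requestAnimationFrame( update );
}
update();
```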

teen-spirit-03

Time for a new experiment. And a new song. I am a big fan of rock, so I started looking for the next tune in my own music library. I chose God Hates A Coward, by Tomahawk. I love this song. There is an awesome live version on YouTube where we can see Mike Patton barking the lyrics behind a mask.

That mask could be interesting to use in a visualization, so I started googling images of masks.
This one grabbed my attention. It seems to be a drawing based on this photo, but instead of the text on the cylinder, there are just lines. Once again I imagined how those lines could look when reacting to music.

teen-spirit-04

So I sat down to code and tried to replicate that cylinder. I tried a few geometries in Three.js, but I realized I needed more control over the vertices. It was one of those moments when my brain just wouldn’t shut down; I remember figuring out how to do it on the street, walking back from lunch. The solution was to divide the bars into segments and then stretch the vertices only up to the limit of each segment. For example, if a bar has 10 segments and the value it needs to represent is 0.96, the entire bar isn’t scaled down to 0.96; instead, the first 9 segments take the full value of 0.1 each and only the last segment is scaled down to 0.06. Those segments can then be distributed around a circle and the shape is preserved for any value.
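A small sketch of that idea (the function name is mine):

```javascript
// Split a normalized value across N segments: every segment below the value
// is full, the one containing it is partially filled, the rest are zero.
function segmentHeights( value, segments ) {
  const share = 1 / segments; // each segment represents e.g. 0.1 for 10 segments
  return Array.from( { length: segments }, ( _, i ) =>
    Math.max( 0, Math.min( share, value - i * share ) )
  );
}

segmentHeights( 0.96, 10 );
// -> approximately [ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.06 ]
```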

The more I saw those bars reacting to music in 3D, the further away I got from the idea of the mask. The circular bars had something of their own and I couldn’t stop playing with them. Eventually I dropped the idea of the mask, and also the song I was using. In my tests I found that the bars reacted much better to the Nirvana tango I had been using earlier. At this point the two experiments merged.

I feel stupid
And contagious
Here we are now
Entertain us

I didn’t know exactly what to do with all those shapes. All I knew was that some of them looked pretty cool at certain camera angles and light positions, so I started to create some scenes with my favourite settings. I have to say it was a constant battle in my head between using pre-defined scenes and making everything dynamic. Some people might just start clicking and close the experiment because it doesn’t react. But I wanted to make something tailored for that song, something like Robert Hodgin’s Solar. Everything is generated by code and runs in real time in the browser, but it could also be a video.

teen-spirit-02

The creation of these scenes is what took most of the time. There was a lot of experimentation and a lot of stuff didn’t make it into the final version. Together with the sound-reactive bars, I can say there were two other major accomplishments: one was finally getting my head around quaternions to be able to tween the camera smoothly – I should write another post about that, but I wrote it up on Stack Overflow instead – and the other was adding the words ‘hello’ and ‘how low’ in a way that would fit well with the visuals.
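For the camera part, the core of it is a quaternion slerp; a minimal sketch (not the project’s code, and `targetCamera` is an assumed stand-in for the destination orientation):

```javascript
// Tween the camera orientation by slerping between two quaternions.
const startQuaternion = camera.quaternion.clone();
const endQuaternion = targetCamera.quaternion.clone();

function tweenCamera( t ) { // t goes from 0 to 1
  camera.quaternion.copy( startQuaternion ).slerp( endQuaternion, t );
}
```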

I plan to explore these two topics a bit more in the future. And I definitely want to do more music visualization. Hopefully next time with some rock n’ roll!

Mind The Gap

mind-the-gap-02

After I finished my previous experiment with the Web Audio API I was looking for something else to do with sound. I had this idea of using London Underground’s data and playing a note every time a train left a station. I could assign a different note for each line and the live feed would create random music all day long. So I started checking TfL Developers’ Area and it didn’t take long to realize that my idea wouldn’t be possible. The data does show predicted arrival times for each station, but these are rounded to 30 seconds. If the experiment were to use the data literally, it would stay silent for 30s, then play a bunch of notes at the same time, then go back to silence for another 30s. A friend even suggested randomizing some values in between those 30s, but that wouldn’t be any different from just listening to some random notes chosen by the computer, without any connection to the tube.

OK, that idea was gone, but the data was quite interesting. With the rounded times I could tween the position of the trains between stations. It would be cool to see the trains moving in ‘almost’ real time on the screen, wouldn’t it? Oh wait, someone did it already: Live map of London Underground, by Matthew Somerville. And it is nice, but not really what I had in mind. I wanted more of a cool visualization based on the tube data, rather than an informative/useful map. How could I do something new with this data? Add a third dimension maybe? Three.js was on my list of things to experiment with for a long time and this seemed like the right opportunity. Oh wait, has someone done it already? The only thing I could find was this and it is definitely not what I had in mind. So yeah, green light!

mind-the-gap-01

To build a 3D map of the tube I would need x, y, z values. It is amazing what we can find on the internet; it’s all there: Depth of tube lines and London Underground Station Locations.

I had everything I needed: train times, latitude, longitude and depth of the stations. The data came from many different files, so I stretched my regex skills and created a simple tool with Adobe AIR to parse everything and output a consolidated .json for me. With that I could finally plot some points in space using the Mercator projection. The next step was to create tubes connecting these points, and again I was really lucky to find exactly what I needed online. Three.js is an amazing library not only because of what it does, but also because of how it is made. Together with the classes I needed (TubeGeometry and SplineCurve3), I also found the conversation between the authors while they were developing these classes.
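A hedged sketch of those two steps, not the project’s actual code (the `stations` array, scale and radius are made up; SplineCurve3 has since been superseded by CatmullRomCurve3):

```javascript
import * as THREE from 'three';

// Project a station's latitude/longitude with a simple Mercator projection,
// use its depth below ground as the vertical coordinate, then run a curve
// through a line's stations and wrap it in a TubeGeometry.
function stationToVector3( lat, lon, depth, scale = 1000 ) {
  const x = ( lon * Math.PI / 180 ) * scale;
  const z = Math.log( Math.tan( Math.PI / 4 + ( lat * Math.PI / 180 ) / 2 ) ) * scale;
  return new THREE.Vector3( x, - depth, z ); // deeper stations sit lower
}

const points = stations.map( ( s ) => stationToVector3( s.lat, s.lon, s.depth ) );
const curve = new THREE.CatmullRomCurve3( points );
const tube = new THREE.Mesh(
  new THREE.TubeGeometry( curve, 200, 2, 8, false ), // path, segments, radius, radial segments, closed
  new THREE.MeshBasicMaterial( { color: 0xcccccc } )
);
```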

One of the biggest challenges was making the trains follow the spline and sometimes change splines depending on the train’s destination. I feel that my algorithm could be more robust here, but it works well. The last touches were adding labels for each station and some ambient sound recorded on the tube.
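The position of a train between two stations can then be read off the line’s curve; a sketch with assumed names:

```javascript
// Tween the curve parameter between the departure and arrival stations'
// positions on the spline, then place the train mesh at that point.
const t = departureT + ( arrivalT - departureT ) * progress; // progress in [0, 1]
trainMesh.position.copy( curve.getPointAt( t ) );
```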

That’s it. I hope people find it interesting and play with it.

Launch the experiment.